Package Visibility is Broken

In Java, classes and class members have package visibility by default. To restrict or widen the visibility of classes and class members, the access modifiers private, protected, and public are used.

Modifier      Class  Package  Subclass  World
public        Y      Y        Y         Y
protected     Y      Y        Y         N
no modifier   Y      Y        N         N
private       Y      N        N         N

(from Controlling Access to Members)

These modifiers control encapsulation along two dimensions: packaging and subclassing. Together, they make it possible to encapsulate code in flexible ways. Sadly, the two dimensions interfere in nasty ways.

Shadowing

A subclass might not see all methods of its superclass, and can thus redeclare a method with an existing name. This is called shadowing or name masking. For instance, a class and its subclass can both declare a private method foo() without any overriding taking place. This situation is confusing and best avoided.
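A minimal sketch (with hypothetical names Base and Derived): both classes declare a private foo(), and the superclass's code keeps calling its own version:

class Base {
    private int foo() { return 1; }   // invisible to subclasses
    int callFoo() { return foo(); }   // statically bound to Base.foo()
}

class Derived extends Base {
    private int foo() { return 2; }   // shadows Base.foo(); no overriding takes place
}

// new Derived().callFoo() returns 1, not 2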

With package visibility, the situation gets worse. Let us consider the snippet below:

// file a/A.java
package a;
public class A {
    int say() { return 1; }
}

// file b/B.java
package b;
public class B extends a.A {
    int say() { return 2; }
}

// file a/Test.java
package a;
class Test {
    public static void main(String[] args) {
        a.A a = new b.B();
        System.out.println(a.say()); // prints 1, WTF!!
    }
}

 (from A thousand years of productivity: the JRebel Story)

The second method B.say() does not override A.say() but shadows it. Consequently, the static type at the call site determines which method is invoked.

One could argue that everything works as intended, and that it is clear that B.say() does not override A.say() since there is no @Override annotation.
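Indeed, adding @Override to B.say() makes the compiler reject the declaration, which confirms that no overriding takes place:

package b;
public class B extends a.A {
    @Override // compile-time error: method does not override or implement a method from a supertype
    int say() { return 2; }
}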

This argument makes sense when private methods are shadowed. In that case, the developer knows about the implementation of the class and can figure this out. For methods with package visibility, the argument is not acceptable since developers shouldn’t have to rely on implementation details of a class, only its visible interface.

The static types in a program should not influence the run-time semantics. The program should work the same whether the variable “a” has static type “A” or “B”.

Reflection

With reflection, programmers have the ability to inspect and invoke methods in unanticipated ways. Reflection should honor the visibility rules and authorize only legitimate actions. Unfortunately, it's hard to define what is legitimate and what is not. Let us consider the snippet below:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME) // runtime retention, so the annotation should be visible via reflection
@interface MyAnnotation {
}

class Super {
  @MyAnnotation
  public void methodOfSuper() {
  }
}

public class Sub extends Super {
}

Method m = Sub.class.getMethod("methodOfSuper");
m.getAnnotations(); // WTF, empty list

Clearly, the method methodOfSuper is publicly exposed by instances of the class Sub. It’s legitimate to be able to reflect upon it from another package. The class Super is however not publicly visible, and its annotations are thus ignored by the reflection machinery.
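For comparison, a minimal check (done from within Super's package, since Super is not public, and assuming the runtime retention shown above) finds the annotation on the declaring class's method:

Method direct = Super.class.getDeclaredMethod("methodOfSuper");
System.out.println(java.util.Arrays.toString(direct.getAnnotations()));
// prints something like [@MyAnnotation()]: the annotation is there,
// just not reachable through Sub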

Package visibility is broken

Package visibility sits between private and protected: some classes have access to the member, but not all (only those in the same package). This visibility sounds appealing: bundle code in small packages, expose the package API using the public access modifier, and let classes within the package freely access each other. Unfortunately, as the examples above have shown, this strategy breaks down in certain cases.

Accessibility in Java is in a way too flexible. The combination of the four modifiers with the possibility to inherit and “widen” the visibility of classes and class members can lead to obscure behaviors.

Simpler forms of accessibility should thus be preferred. Smalltalk, for instance, supports inheritance but has no access modifiers: methods are always public and fields are always protected. Go, on the other hand, embraces package visibility but got rid of inheritance. Simple solutions are easier to get right.

NOTES:

  • In “Moderne Software-Architektur: Umsichtig planen, robust bauen mit Quasar” the author argues that method-level visibility makes no sense. Instead, components consist of classes, which are either exposed to the outside (the component interface) or belong to the component’s internals and are hidden (the component implementation). This goes in the direction of OSGi and the future Java module system.

The Cost of Volatile

Assessing the scalability of programs and algorithms on multicore hardware is critical. There is a substantial literature on locks and locking schemes, but the exact cost of volatile is less clear.

For the software composition seminar, I proposed a small project on the topic this year. The project was realized by Stefan Nüsch. He did a nice job, and his results shed some light on the matter.

Essentially, we devised a benchmark where multiple threads access objects within a pool. Each thread has a set of objects to work with. To generate contention, the sets could be fully disjoint, have a partial overlap, or have a full overlap. The ratio of reads and writes per thread was also configurable.
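For illustration, here is a minimal sketch of such a harness; the class and parameter names (poolSize, writeRatio, windowSize) are made up for this post and are not the actual project code:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadLocalRandom;

public class VolatileBench {
    static class Cell { volatile long value; }  // switch to a plain long to compare

    public static void main(String[] args) throws InterruptedException {
        final int threads = 4, poolSize = 64, opsPerThread = 1_000_000;
        final double writeRatio = 0.2;   // 20% writes, 80% reads
        final int windowSize = 32;       // size of each thread's working set
        final Cell[] pool = new Cell[poolSize];
        for (int i = 0; i < poolSize; i++) pool[i] = new Cell();

        final CountDownLatch start = new CountDownLatch(1);
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            // overlapping windows into the pool generate contention
            final int base = t * (poolSize - windowSize) / Math.max(1, threads - 1);
            workers[t] = new Thread(() -> {
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                try { start.await(); } catch (InterruptedException e) { return; }
                long sink = 0;
                for (int i = 0; i < opsPerThread; i++) {
                    Cell c = pool[base + rnd.nextInt(windowSize)];
                    if (rnd.nextDouble() < writeRatio) c.value++;  // write
                    else sink += c.value;                          // read
                }
                if (sink == 42) System.out.println("unlikely");    // keep reads alive
            });
            workers[t].start();
        }
        long t0 = System.nanoTime();
        start.countDown();
        for (Thread w : workers) w.join();
        System.out.printf("%d threads took %.1f ms%n", threads, (System.nanoTime() - t0) / 1e6);
    }
}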

On an AMD64 Dual Core, the graph looks as follows:

[Figure: bench_amd64]

On an i7 Quad Core, the graph looks as follows:

[Figure: bench_i7]

We clearly see that different architectures have different performance profiles.

In future work, we could try to reproduce false sharing and assess the impact of other forms of data locality.

More details about the benchmark and methodology can be found in his presentation. The code is on GitHub.

Here are some links about the semantics of volatile, and memory management in general.

Understanding the Visibility of Side-Effects

This entry is all about the Java Memory Model and the happens-before partial order it formalizes.

The official specification of the JMM is authoritative but unreadable. Fortunately, there are useful resources around that aim to make it more accessible.

Happens-Before

The JMM defines a partial order called happens-before on all actions in the program. Actions are reads and writes of variables, locks and unlocks of monitors, and so on. The rules for happens-before are as follows:

  • Program order rule. Each action in a thread happens-before every action in that thread that comes later in the program order.
  • Monitor lock rule. An unlock on a monitor lock happens-before every subsequent lock on that same monitor lock.
  • Volatile variable rule. A write to a volatile field happens-before every subsequent read of that same field.
  • Thread start rule. A call to Thread.start on a thread happens-before every action in the started thread.
  • Thread termination rule. Any action in a thread happens-before any other thread detects that that thread has terminated, either by successfully returning from Thread.join or by Thread.isAlive returning false.

The happens-before partial order provides clear guidance about which values are allowed to be returned when data is read. In other words: it formalizes the visibility of side-effects across threads.
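For instance, the program order rule and the volatile variable rule compose transitively. In the minimal sketch below (a hypothetical Publisher class), if the reader sees ready == true, it is guaranteed to see data == 42:

class Publisher {
    int data;                // plain field
    volatile boolean ready;  // volatile flag

    void writer() {          // executed by thread 1
        data = 42;           // happens-before the volatile write (program order rule)
        ready = true;        // volatile write
    }

    void reader() {          // executed by thread 2
        if (ready) {                    // volatile read (volatile variable rule)
            System.out.println(data);   // guaranteed to print 42
        }
    }
}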

A read is allowed to return the value of a write 1) if that write is the last write to that variable before the read along some path in the happens-before order, or 2) if the write is not ordered with respect to that read in the happens-before order.

Instructions can be reordered as long as the happens-before order is preserved. The code below

t1 = 1;
t2 = 2;
t3 = t1+t2;
t4 = 2*t1;

can for instance be reordered as follows

t2 = 2;
t1 = 1;
t4 = 2*t1;
t3 = t1+t2;

If two threads share data without using any synchronization, the writes of one thread are considered unordered with respect to the other thread; according to condition 2), the execution is not deterministic and the writes of one thread might or might not be visible to the other thread.
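Removing the volatile from the Publisher sketch above gives a concrete instance: the writer's actions are unordered with respect to the reader, so the reader may spin forever or legally observe a stale data.

class RacyPublisher {
    int data;
    boolean ready;           // no volatile: no happens-before edge to the reader

    void writer() { data = 42; ready = true; }

    void reader() {
        while (!ready) { }            // may never see the write and spin forever...
        System.out.println(data);     // ...or see ready == true and still print 0
    }
}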

Let us consider the two threads below, and one given execution:

  T1        T2

s = s+1 

          s = s+1

s = s+1 

The second increment might or might not see the previous increment in T1. Similarly, the third increment might or might not see the side-effect of the second increment.

In improperly synchronized programs, a read will return the value of one of the possible matching writes, according to conditions 1) and 2). This corresponds to a data race. In properly synchronized programs, a read has one unique ancestor write.

  T1        T2

lock m
  |
s = s+1 
  |
unlock m  
          \  
          lock m
             |
          s = s+1
             |
          unlock m
         /
lock m
  |
s = s+1 
  |
unlock m                  
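In code, this execution corresponds to guarding each increment with the same lock (a minimal sketch):

class SynchronizedCounter {
    private final Object m = new Object();
    private int s;

    void increment() {
        synchronized (m) {   // lock m ... unlock m, as in the diagram above
            s = s + 1;       // each read of s has a unique ancestor write
        }
    }
}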

Implementing Happens-Before

Conditions 1) and 2) specify that side-effects might or might not be visible consistently to threads in improperly synchronized programs. They allow the implementation to avoid reading shared memory all the time, using CPU caches or compiler optimizations. The specification does, however, never speak of specific implementation choices like caching and optimizations.

For instance, the code

t2 = s+1
t3 = s*2

could be rewritten by a compiler optimization as

t = s
t2 = t+1
t3 = t*2

Here, t caches the value read from s.

Typically, the compiler emits memory barriers so that such cached values become visible across CPUs at synchronization points. The JSR-133 Cookbook provides guidance about implementation issues.

Data Races

Failure to properly synchronize accesses to shared data is called a data race. Inversely, a program is data race free if all accesses to shared state are synchronized. But what does data race freedom exactly mean?

Is the program data race free if,

  1. accesses to shared data happen within critical section?
  2. shared data is declared volatile?
  3. accesses to shared data are never concurrent?

Issues about the memory model are typically discussed using the double-checked locking pattern and the “pausable” thread pattern. These provide only partial answers to these questions.

Point 1 is true. If the accesses to shared data are protected by the same critical section, there is no possible data race.

Point 2 is also true. If you declare the shared variables as volatile, side effects will be made visible to other threads and there will be no data race. But remember: data race freedom does not mean that the behavior is correct. Consider the trivial example below:

counter = counter + 1;

Making the counter volatile won’t suffice to ensure all increments are recorded.
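The increment is a read followed by a write; two threads may both read the same value and write back the same result, losing one update. A sketch of the difference, using java.util.concurrent.atomic.AtomicInteger for the atomic variant:

import java.util.concurrent.atomic.AtomicInteger;

class Counters {
    volatile int counter;                       // visible, but increments can still be lost
    final AtomicInteger atomicCounter = new AtomicInteger();

    void unsafeIncrement() {
        counter = counter + 1;   // read and write are separate actions; a concurrent
                                 // write may land in between and get overwritten
    }

    void safeIncrement() {
        atomicCounter.incrementAndGet();  // single atomic read-modify-write
    }
}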

Point 3 holds, but requires some clarification of the underlying assumptions (answers to my Stack Overflow question failed to be clear-cut and authoritative). Let us consider the situation where multiple threads manipulate an object X that is plain data, but alternate their temporal execution (so that X is never accessed concurrently) via another object Y that relies on concurrency control (wait, notify, synchronized). Should the fields of object X be volatile or not?

If we assume that the threads are synchronized using at least one of the concurrency control primitives (and not just temporal alternation thanks, say, to well-timed sleep statements), this implies the alternating acquisition of at least one lock m, as for instance in:

while (true) {
    synchronized (m) { m.wait(); }   // wait must be invoked on the monitor m we hold
    counter = counter + 1;
    synchronized (m) { m.notify(); }
}

There is a happens-before dependency between the increment statement and the release of the wait lock. By construction, when the lock is released, exactly one thread will acquire it and wait again. So exactly one increment will happen-after the release of the wait lock. Consequently, the increment is guaranteed to see the value of the previous increment; there is no need for volatile.

More