Scala: An Ambitious Language

In the object-oriented paradigm, a system consists of objects with mutable state, whereas in the functional paradigm, it consists of functions and immutable values. At first, these two worlds seem incompatible.

Not so for Martin Odersky. In 2004 he released the first version of Scala, a language that combines both.

Scala’s roots are object-oriented, sharing the same basic constructs as Java, with which it is fully interoperable. Its functional flavor comes from several features borrowed or transposed from functional languages like Haskell: first-class and higher-order functions (including currying), pattern matching with case classes, and support for monads and tail recursion.
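
To give a small taste of this functional flavor, here is an illustrative sketch (the Shape types and the scale function are made up for the example):

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(width: Double, height: Double) extends Shape

// pattern matching on case classes
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

// a curried function, partially applied to obtain a new function
def scale(factor: Double)(x: Double): Double = factor * x
val double: Double => Double = scale(2.0)

// higher-order functions over an immutable list
val areas = List(Circle(1.0), Rect(2.0, 3.0)).map(area).map(double)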

The marriage is surprisingly elegant. Maybe the two worlds are compatible after all.

But the ambitions of Scala do not stop here. It also aims at being scalable, both in terms of modularity and in terms of expressivity. Scala should support the modularisation of small and large components, and help reduce the gap between the code and the domain concepts.

The many features of Scala’s type system enable scalability along both axes. Traits, for instance, enable a fine-grained modularisation of object behaviors. Implicit conversions, on the other hand, enable existing library types to be extended so that code reads more clearly.
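
To make this concrete, here is a minimal sketch (the names Loggable, Service, and RichString are illustrative): a trait mixes a small slice of behavior into a class, and an implicit class retroactively adds a method to an existing library type.

// a trait captures a fine-grained slice of behavior that any class can mix in
trait Loggable {
  def log(msg: String): Unit = println(s"[${getClass.getSimpleName}] $msg")
}

class Service extends Loggable {
  def run(): Unit = log("running")
}

// an implicit conversion extends an existing type, here String
object StringExtensions {
  implicit class RichString(val s: String) extends AnyVal {
    def vowels: Int = s.count(c => "aeiou".contains(c))
  }
}

import StringExtensions._
"hello world".vowels // 3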

But more importantly, features of the language create synergies. Abstract type members combined with type nesting enable the cake pattern, a form of dependency injection, or family polymorphism, a way to type-check constellations of related classes. Support for call-by-name parameters combined with implicits enables the definition of domain-specific languages.
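
As a sketch of the call-by-name synergy (retry and RetryPolicy are made up for illustration, not a standard API): the body is passed unevaluated and re-run on each attempt, while the policy is threaded in as an implicit parameter, so the call site reads like a built-in control structure.

import scala.util.control.NonFatal

case class RetryPolicy(maxAttempts: Int)

// body is call-by-name: it is evaluated on each attempt, not at the call site
def retry[T](body: => T)(implicit policy: RetryPolicy): T = {
  def attempt(n: Int): T =
    try body
    catch { case NonFatal(e) if n < policy.maxAttempts => attempt(n + 1) }
  attempt(1)
}

implicit val policy: RetryPolicy = RetryPolicy(maxAttempts = 3)

def flakyCall(): String = sys.error("down") // hypothetical flaky operation

retry { flakyCall() } // reads almost like a native control structure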

You can’t help but be amazed by how features sometimes combine. It is for instance possible to map a collection and convert its type at the same time using the special breakOut object. You can even pattern match on regular expressions!
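
Both tricks look like this in Scala 2 (the data is illustrative):

import scala.collection.breakOut

// map a List directly into a Map in one pass: the expected type
// tells breakOut which builder to provide
val index: Map[Int, String] = List(1, 2, 3).map(i => i -> i.toString)(breakOut)

// a compiled regular expression is an extractor and can be pattern matched
val Date = """(\d{4})-(\d{2})-(\d{2})""".r
"2015-06-08" match {
  case Date(year, month, day) => println(s"year $year, month $month, day $day")
  case _                      => println("not a date")
}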

Such synergies are possible because the foundations of Scala are principled.

  • First, everything is an object. There are no primitive types. Instead, the type hierarchy has two main roots: AnyVal for objects with value semantics and AnyRef for objects with reference semantics.
  • Second, you can abstract over types, values, and functions, using either parametrization or abstract members. Both forms of abstraction apply consistently to all three constructs.
  • Third, any object that defines an apply() method can be used as a function. This closes the gap between functions and objects. The inverse of apply() is unapply(): any object that defines unapply() can be used as an extractor in pattern matching.

Take the expression “val l = List(1,2,3)”. This is not native syntax for list construction, but the evaluation of the method “apply” on the singleton object “List” with the arguments “1,2,3”. Or take the expression “val (x,y) = (1,2)”. This is not native syntax for multiple assignment, but tuple unpacking using extractors. These principles enable nice extensions of the language.
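
The same mechanisms are open to user code. A minimal sketch (the Email object is made up for the example):

object Email {
  // Email("jane", "example.org") desugars to Email.apply("jane", "example.org"),
  // just like List(1,2,3) desugars to List.apply(1,2,3)
  def apply(user: String, domain: String): String = s"$user@$domain"

  // unapply makes Email usable as an extractor in pattern matching
  def unapply(address: String): Option[(String, String)] =
    address.split("@") match {
      case Array(user, domain) => Some((user, domain))
      case _                   => None
    }
}

val address = Email("jane", "example.org")
val Email(user, domain) = address // extractor-based unpacking, like val (x,y) = (1,2)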

The flexibility of Scala has a price though: it is easy to learn Scala on the surface, but mastering its intricacies is challenging.

Also, Scala comes with many additional features that seem to exist more for convenience than necessity, making it even harder to master. It is for instance questionable whether structural typing or default parameter values, to name a few, should really have made it into the language. Clearly they are useful and alleviate some pain points of Java, but they also distract from the essence of the language. Scala might at times appear to lack focus.

The richness of the language is acknowledged by the Scala community itself. To quote Odersky, “Scala is a bit of a chameleon. It makes many programming tasks refreshingly easy and at the same time contains some pretty intricate constructs that allow experts to design truly advanced typesafe libraries.”

Scala is a language with many very powerful features and many ways to do things. It’s up to developers to use the features well and to enforce a consistent programming style. For corporations, these two aspects could be a barrier to adoption. In comparison, a language like Kotlin offers the same basic ingredients but is a lot simpler.

Odersky’s long bet seems to be paying off, though. Scala has found its audience and made its way into industry, including top players like Twitter and LinkedIn. It has established itself as a viable alternative.

Scala is a source of innovation and inspiration. While functions already existed in object-oriented languages like Smalltalk in the 80s, Scala showed that object-orientation doesn’t imply mutability. The resulting programming style, “OO in the large, FP in the small”, is gaining traction. Now that Scala has shown that the combination works, other languages will certainly follow this path.

Ten years after its inception, Scala has a mature and lively community of users. To gain further adoption, it must now consolidate its foundation and keep it stable across releases. Fortunately, we can still count on Odersky to continue to innovate at the same time. At the recent ScalaDays 2015, he unveiled his plan to better control mutation of state, not with monads, but with implicit conversions. That is yet another ambitious challenge.

Package Visibility is Broken

In Java, classes and class members have package visibility by default. To restrict or widen the visibility of classes and class members, the access modifiers private, protected, and public must be used.

| Modifier    | Class | Package | Subclass | World |
+-------------+-------+---------+----------+-------+
| public      |   Y   |    Y    |    Y     |   Y   |
| protected   |   Y   |    Y    |    Y     |   N   |
| no modifier |   Y   |    Y    |    N     |   N   |
| private     |   Y   |    N    |    N     |   N   |

(from Controlling Access to Members)

These modifiers control encapsulation along two dimensions: packaging and subclassing. With these modifiers, it becomes possible to encapsulate code in flexible ways. Sadly, the two dimensions interfere in nasty ways.

Shadowing

A subclass might not see all methods of its superclass, and can thus redeclare a method with an existing name. This is called shadowing or name masking. For instance, a class and its subclass can both declare a private method foo() without any overriding taking place. This situation is confusing and best avoided.

With package visibility, the situation gets worse. Let us consider the snippet below:

// in a/A.java
package a;
public class A {
    int say() { return 1; }
}

// in b/B.java
package b;
public class B extends a.A {
    int say() { return 2; }
}

// in a/Test.java
package a;
class Test {
    public static void main(String[] args) {
        a.A a = new b.B();
        System.out.println(a.say()); // prints 1, WTF!!
    }
}

 (from A thousand years of productivity: the JRebel Story)

The second method B.say() does not override A.say() but shadows it. Consequently, the static type at the call site determines which method is invoked.

One could argue that everything works as intended, and that it is clear that B.say() does not override A.say() since there is no @Override annotation.

This argument makes sense when private methods are shadowed. In that case, the developer knows about the implementation of the class and can figure it out. For methods with package visibility, the argument is not acceptable, since developers shouldn’t have to rely on the implementation details of a class, only on its visible interface.

The static types in a program should not influence the run-time semantics. The program should work the same whether the variable “a” has static type “A” or “B”.

Reflection

With reflection, programmers have the ability to inspect and invoke methods in unanticipated ways. Reflection should honor the visibility rules and authorize only legitimate actions. Unfortunately, it’s hard to define what is legitimate or not. Let us consider the snippet below:

class Super {
  @MyAnnotation
  public void methodOfSuper() {
  }
}

public class Sub extends Super {
}

Method m = Sub.class.getMethod("methodOfSuper");
m.getAnnotations(); // WTF, empty list

Clearly, the method methodOfSuper is publicly exposed by instances of the class Sub. It’s legitimate to be able to reflect upon it from another package. The class Super is however not publicly visible, and its annotations are thus ignored by the reflection machinery.

Package visibility is broken

Package visibility is a form of visibility between private and protected: some classes have access to the member, but not all (only those in the same package). This visibility sounds appealing: bundle code in small packages, expose the package API using the public access modifier, and let classes within the package freely access each other. Unfortunately, as the examples above have shown, this strategy breaks down in certain cases.

Accessibility in Java is in a way too flexible. The combination of the four modifiers with the possibility to inherit and “widen” the visibility of classes and class members can lead to obscure behaviors.

Simpler forms of accessibility should thus be preferred. Smalltalk, for instance, supports inheritance but has no access modifiers; methods are always public and fields are always protected. Go, on the other hand, embraces package visibility but got rid of inheritance. Simple solutions are easier to get right.

NOTES:

  • In “Moderne Software-Architektur: Umsichtig planen, robust bauen mit Quasar” the author argues that method-level visibility makes no sense. Instead, components consist of classes, which are either exposed to the outside (the component interface) or belong to the component’s internals and are hidden (the component implementation). This goes in the direction of OSGi and the future Java module system.

Masterminds of Programming

Masterminds of Programming features exclusive interviews with the creators of popular programming languages. Over more than 400 pages, the book collects the views of these inventors on varying topics such as language design, backward compatibility, software complexity, developer productivity, and innovation.

Interestingly, there isn’t that much about language design in the book. The creation of a language seems to happen out of necessity, and the design itself is mostly the realization of an intuitive vision based on gut feelings and bold opinions. The authors’ judgments about trade-offs (e.g. static vs. dynamic typing, or security vs. performance) are surprisingly unbalanced, and when asked to explain the rationale for some design choices, the explanations tend to be rather scarce.

Instead, the authors describe with passion the influences that led them to a particular design. The book thus contains a good deal of historical information about the context in which each language was born.

  • C++ was invented to enable system programming with objects
  • Awk was invented to easily process data in a UNIX fashion
  • BASIC was invented to teach students programming
  • Lua was invented to easily script components
  • Haskell was invented to unify the functional programming language community
  • SQL was invented to query relational databases with an approachable language
  • Objective-C was invented to bring objects to the C world
  • Java was invented to provide a secure language in a networked world
  • C# was invented as the strategic language for the modern Microsoft platform .NET
  • UML was invented as the unification of modeling languages
  • PostScript was invented to enable flexible typesetting and printing
  • Eiffel was invented to make objects robust with contracts

Both the interviewers and interviewees are knowledgeable and articulate. The inventors smoothly distill their experience and insights during semi-structured interviews. Throughout the book, discussions remain mostly general, which is both a plus and a minus: the material is accessible to all, but multiple sections have a low information density. The book could easily have been shortened with better editing.

The discussions about software engineering in general turned out to be the ones I enjoyed most. Some of the interesting ideas touched on in the book were for instance:

  • Simulating projects helps acquire experience faster, p.254
  • Classes are units of progress in a system, p.255
  • We need an economic model of software, p.266
  • Object-oriented programming and immutability are compatible, p.315
  • What UML is good for: useful for data modelling, moderately useful for system decomposition, not so useful for dynamic things, p.342
  • Generating code from UML is a terrible idea, p.339
  • There’s no software crisis; it’s overplayed for shock value, p.354
  • How broken HTML is, and how much better it would have been if the web had started with a typesetting language like PostScript, p.405

These points come from the later interviews, but there are similarly nice bits and pieces in all chapters; it just turned out that I started taking notes only halfway through the book.

Amongst the recurring themes, the notion of simplicity stands out and is discussed multiple times, at the language level and at the software level. Several interviewees quote Einstein’s “as simple as possible, but not simpler”, and emphasize the concepts of minimalism and purity, each in their own way.

The book is also very good at instilling curiosity about unknown languages. I was initially tempted to skip the chapters about languages I didn’t know, and am glad that I didn’t. Stack-based languages like Forth and PostScript appear as examples of a powerful but overlooked paradigm; the chapter about Awk almost reconciled me with bash scripting; and the discussion about UML made me reconsider its success: the fact that the whole industry agreed on a common notation for basic language constructs shouldn’t be taken for granted.

In conclusion, this book isn’t essential, but it is enjoyable if you are an all-rounder with some time ahead of you, and you appreciate thinking aloud and good discussions around a cup of coffee.

Your Language is a Start-up

Watching the TIOBE index of programming language popularity is depressing. PHP and Javascript rule the web, despite the consensus that they are horrible; Haskell and Smalltalk are relegated to academic prototyping, but unanimously praised for their conceptual purity. How technology adoption happens is a puzzling question.

The evidence seems to suggest that what matters is to attract a set of initial users, and then expand. The initial offer needn’t be particularly compelling. As long as it wins on one dimension (maybe because it’s embarrassingly simple, or provides a very effective solution to a very specific problem) it might attract early adopters. PHP won because of its simplicity in getting things done; Javascript won because it was the first to provide a solution to make HTML dynamic. After initial success in a niche, the technology can evolve to attract more users. PHP and Javascript evolved later to fix their initial design flaws. They are both now mature object-oriented languages.

The price for fixing initial flaws is however extraordinarily high. Once a language feature is designed and made available, it’s cast in stone. Evolving a language while maintaining backward compatibility is extremely challenging, but breaking compatibility and dealing with multiple branches isn’t an easy solution either. Notorious examples of evolutions in Java are the Java Memory Model and generics. It took years of research to plan them, and years of availability to reeducate the community. C++ is still trying to catch up, and still lacks features that we take for granted on some platforms, e.g. a standardized serialization.

Surprisingly, when adoption happens, it might come from a different audience than the one expected. “Languages designed for limited or local use can win a broad clientele, sometimes in environments and for applications that their designers never dreamed of,” say the authors of Masterminds of Programming. This is definitely true. Java was initially designed for embedded systems, but succeeded instead in the enterprise. Javascript was conceived as a thin veneer for web pages, but now powers the new generation of client-side web apps and is even expanding to the server side.

When a technology starts to decline after the adoption peak, don’t be too quick to claim it dead. It might enjoy an unexpected renaissance. Many have for instance claimed that Java was dead. They failed however to recognize that the underlying JVM is a rocking beast, amazingly fast and versatile for those who know how to tame it. Nowadays, one of the best strategies to implement a new language is to leverage the JVM and provide interoperability with Java libraries. In turn, this massive adoption of the JVM by new language implementors is driving innovation in the JVM itself, which has been extended with a new bytecode (invokedynamic) for languages other than Java. I doubt that James Gosling had anticipated this evolution of the platform.

Clearly, the key characteristics of a language are its syntax and semantics, since together they define its expressive power. Expressivity isn’t however the only force at play in adoption. What a real-world check suggests is that expressivity is only one factor amongst many other technical and social factors. The ease of debugging or the existence of a friendly community could for instance turn out to be more important for some users than the ease of writing code. Language designers typically underestimate such factors, severely impeding their chances of success.

To foster adoption, one must also reckon that people are reluctant to change. What people are already familiar with must be taken into consideration: in 2008, mobile users wouldn’t have been ready for the minimalistic iOS 7 interface. They were however ready for the original skeuomorphic interface, and now that they have become familiar with the platform, they can let go of the skeuomorphic ornaments. People don’t change for the sake of changing; they change to solve a problem, and they change only if the gain outweighs the pain. For programming languages, the problem is productivity and the pain is learning a new platform. In 2013, developers might not be familiar enough with functional programming to adopt a pure functional language like Haskell, but they definitely are ready to adopt a hybrid language like Scala.

Together, these elements might help explain the failure of some great languages, for instance Lisp. Lisp is a beautiful programming language that offers amazing flexibility. For a skilled practitioner, Lisp is a secret weapon. However, Lisp does nothing particularly well out of the box. “Lisp isn’t a language, it’s a building material,” as Alan Kay put it. Clojure, on the other hand, is a Lisp dialect with just enough direction to solve one very painful problem: writing concurrent code. Given that the problem is so painful, people won’t mind a few parentheses to solve it. This choice paid off, and in 2012 Clojure moved into the “adopt” ring of the ThoughtWorks technology radar.

The language business is a competitive one where idealism won’t prevail. For a language to be adopted, it must solve a problem for some early adopters, who will then create an attractive ecosystem that convinces the late majority. In other words: language designers should think of their language as a start-up.

Debunking Object-Orientation

What is an object? What is the essence of the object paradigm? How do objects differ from other abstractions? What are their benefits? What are their pitfalls? Can we encode objects with lower-level building blocks? Should we have objects all the way down?

Some people think they are clever to observe that OOP has no formal, universal definition. Democracy, love and intelligence don’t either. — Tweet from Alain de Botton

Here are some essays in the quest for the truth:

And for the haters:

The Cost of Volatile

Assessing the scalability of programs and algorithms on multicores is critical. There is a substantial literature on locks and locking schemes, but the exact cost of volatile is less clear.

For the software composition seminar, I proposed this year a small project on the topic. The project was realized by Stefan Nüsch. He did a nice job, and his results shed some light on the matter.

Essentially, we devised a benchmark where multiple threads access objects within a pool. Each thread has a set of objects to work with. To generate contention, the sets could be fully disjoint, partially overlapping, or fully overlapping. The ratio of reads and writes per thread was also configurable.
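
For illustration, here is a minimal sketch of this kind of benchmark (this is not Stefan’s actual code, which is on GitHub; the names and parameters are made up):

import java.util.concurrent.ThreadLocalRandom

class Cell { @volatile var value: Int = 0 }

object VolatileBench {
  // each worker performs a mix of volatile reads and writes on its set of cells
  def worker(cells: Array[Cell], readRatio: Double, ops: Int): Thread =
    new Thread(new Runnable {
      def run(): Unit = {
        val rnd = ThreadLocalRandom.current()
        var sink = 0
        for (_ <- 1 to ops) {
          val cell = cells(rnd.nextInt(cells.length))
          if (rnd.nextDouble() < readRatio) sink += cell.value // volatile read
          else cell.value = sink                               // volatile write
        }
      }
    })

  def main(args: Array[String]): Unit = {
    val pool = Array.fill(64)(new Cell)
    // full overlap: all threads share the same cells; disjoint or partially
    // overlapping sets would use different slices of the pool per thread
    val threads = (1 to 4).map(_ => worker(pool, readRatio = 0.9, ops = 1000000))
    val start = System.nanoTime()
    threads.foreach(_.start()); threads.foreach(_.join())
    println(s"elapsed: ${(System.nanoTime() - start) / 1000000} ms")
  }
}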

On an AMD64 dual core, the graph looks as follows:

[figure: bench_amd64]

On an i7 quad core, the graph looks as follows:

[figure: bench_i7]

We clearly see that different architectures have different performance profiles.

In future work, we could try to reproduce false sharing and assess the impact of other forms of data locality.

More details about the benchmark and methodology can be found in his presentation. The code is on GitHub.

Here are some links about the semantics of volatile, and memory management in general.

Natural Queries

Programming language research is a quest for expressivity. The aim is to enable the expression of complex computations in concise and intuitive ways.

The problem is that conciseness and intuitiveness usually conflict. Expressivity is enabled by higher-order constructs, which are hard to reason about. On the one hand, functional languages enable the expression of complex computations concisely, but their intuitiveness is low. Integrated query languages can help, though. On the other hand, imperative languages are very intuitive, but their expressivity is low.

Let’s assume you have a time series. A sentence like “pick the average of values by intervals of 5 seconds” describes a non-trivial computation. This sentence in a natural language is both intuitive and concise. Dealing with such natural queries is challenging, but feasible. This is what Facebook does. An important aspect of the feature is that the system gives feedback to the user about how it understood the query. The user can then rephrase or refine the query.

The same could be integrated into development environments. Instead of expressing computations with functions, developers would first give a shot at a natural description. It would be translated into functional code by the environment. Sample data would be generated to exemplify the query, serving both as immediate feedback on the correctness of the query, and later for unit testing. If the generated code is incorrect, it is refined by the developer. The icing on the cake: the original natural query is kept as a code comment.
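
For instance, the query above could plausibly be translated into functional code along these lines (a sketch; the representation of the time series as (seconds, value) pairs is an assumption):

val series: Seq[(Long, Double)] = Seq((0L, 1.0), (2L, 3.0), (6L, 5.0), (9L, 7.0))

// "pick the average of values by intervals of 5 seconds"
val averages: Map[Long, Double] =
  series
    .groupBy { case (t, _) => t / 5 * 5 } // start of the 5-second interval
    .map { case (interval, points) =>
      interval -> points.map(_._2).sum / points.size
    }

// Map(0 -> 2.0, 5 -> 6.0)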

Why Smalltalk?

This question is occasionally asked by students, and here is the answer.

We are not religious. This choice is not dogmatic. We do research both in programming languages and in tool support for software evolution. In both cases Smalltalk is handy:

  • Programming language — Smalltalk is extremely uniform. Experimenting with a language change is faster in Smalltalk than in, say, Java or Ruby. Their syntax and sets of rules are bigger, which implies more work.
  • Tool Support — If you want to extend the environment with more browsers/views/features, you can do it easily. Also, tool support and meta-programming go well together. There is no separation between application code and environment code. This makes the whole system very malleable to experiment with.

Other research platforms exist to ease experimentation in either category. They don’t, however, match the versatility of Smalltalk, which thus remains a very competitive choice to consider. Usually, we mature our projects to Java or Eclipse only after initial success in Smalltalk.

Smalltalk Anthology

  • Smalltalk-80: The Language and its Implementation, by Adele Goldberg
  • Smalltalk-80: Bits of History, Words of Advice, by Glenn Krasner
  • Design Principles Behind Smalltalk, by Daniel H. Ingalls
  • The Early History of Smalltalk, by Alan Kay
  • Smalltalk Best Practice Patterns, by Kent Beck
  • Smalltalk: a Reflective Language, by F. Rivard
  • Back to the Future: the Story of Squeak, a Practical Smalltalk Written in Itself, by Daniel H. Ingalls et al.
  • Special issue on Smalltalk, BYTE magazine, 1981

Tagged Method Dispatch

In the object-oriented paradigm, objects interact with other objects by sending messages. The behavior of the object when receiving the message can be specific to the object itself, or to a class of objects.

In traditional implementations of the paradigm, an object can have only one specific behavior at a time, and objects expose their complete interface to other objects. This is both inflexible and too flexible at the same time.

Imposing a single fixed behavior is too inflexible to support run-time adaptation. Not imposing any restriction on interactions between objects is too flexible, since it enables interactions that are certainly illegal.

The problems of constraining interactions between objects and of enabling adaptation of object behavior can be unified into a single problem of method dispatch.

As a foundation for a satisfying method dispatch mechanism, we could use simple tags. An object might possess multiple implementations of the same method. Method implementations are associated with tags. When an object A sends a message to object B, the method selected for execution is the one matching the filters that have been enabled. A filter is essentially a set of tags. It can be enabled dynamically or structurally.

  • Dynamic filter. Message sends can be qualified with dynamic filters.
  • Structural filter. Objects are organized in an ownership tree, and each object can be associated with structural filters.

When an object A sends a message to object B, the method selected for execution is the method matching the static filters along the ownership tree between A and B, as well as all the dynamic filters enabled in the invocation chain.

Let us consider that object A owns object B, which owns object C. Object C defines two implementations of the method magic:

C>>magic
    <v1,pure>
    ^42

C>>magic
    <v2,pure>
    ^24

Object B is associated with the structural tag pure. Object A sends the message magic to object C and qualifies it with a dynamic tag v2: c magic<v2>.

The set of tags that the method implementation must match is {pure,v2}. The filter pure comes from the static filter on B, and the filter v2 comes from the dynamic filter in the invocation. The message send consequently returns 24.

The dynamic and static tags relevant for a message send must unambiguously select one version of the method; failure to do so results in an exception. (Note that if the absence of an implementation raises a “message not understood” exception, the presence of too many implementations should raise a “message over-understood” exception; this is similar to “message ambiguous” in predicate dispatch.) Methods can have several version tags, though. For instance, we could have 4 variants of magic with tags <v1,a>, <v1,b>, <v2,a>, <v2,b>, selected by independent qualified message sends. For convenience, the IDE could warn the user if implementations of a similar method have ambiguous tags.

Dynamic filters are best leveraged in first-class context objects. Such objects have a sole method do that is in charge of performing a qualified message send:

PureContext>>do: aBlock
    ^ aBlock value<pure>. 

Dynamic filters could also be removed from the invocation chain. For instance, if a variant of a method is a decorator, it can forward the invocation to the original method without the decorating tag. Let us say that dynamic filters are activated with <tag>, and deactivated with </tag>.

C>>magic
    ^42

C>>magic
    <tracing>
    Transcript show: #magic.
    ^ self magic</tracing>

If the tracing variant is selected due to a dynamic filter, it can successfully forward to the non-tracing variant. However, if the non-tracing magic method sends further messages, they won’t be traced. An alternative to filter deactivation would be to support forwarding explicitly, similarly to proceed in aspect-oriented programming.

Methods can be added to and removed from classes dynamically. (How to package class changes into modules is orthogonal to the design of the mechanism.) This approach enables various related goals:

|                |  behavior   |   visibility  |
+----------------+-------------+---------------+
| dynamic filter | perspective |    security   |
| static filter  | strategies  | encapsulation |

While the proposed mechanism could solve known problems with a more or less unified approach, it still seems slightly complicated to grasp. More effort should be put into making it practical.

References

  • Predicate dispatch
  • Context-oriented programming
  • Us
  • Composition Filters
  • Dynamic Ownership

Understanding the Visibility of Side-Effects

This entry is all about the Java Memory Model and the happens-before partial order it formalizes.

The official specification of the JMM is authoritative but unreadable. Fortunately, there are useful resources around that aim at making it more accessible:

Happens-Before

The JMM defines a partial order called happens-before on all actions in a program. Actions are reads and writes to variables, locks and unlocks of monitors, and so on. The rules for happens-before are as follows:

  • Program order rule. Each action in a thread happens-before every action in that thread that comes later in the program order.
  • Monitor lock rule. An unlock on a monitor lock happens-before every subsequent lock on that same monitor lock.
  • Volatile variable rule. A write to a volatile field happens-before every subsequent read of that same field.
  • Thread start rule. A call to Thread.start on a thread happens-before every action in the started thread.
  • Thread termination rule. Any action in a thread happens-before any other thread detects that that thread has terminated, either by successfully returning from Thread.join or by Thread.isAlive returning false.

The happens-before partial order provides clear guidance about which values are allowed to be returned when data is read. In other words: it formalizes the visibility of side-effects across threads.

A read is allowed to return the value of a write 1) if that write is the last write to that variable before the read along some path in the happens-before order, or 2) if the write is not ordered with respect to that read in the happens-before order.
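
To illustrate condition 1), consider the classic safe-publication idiom, here sketched in Scala (@volatile compiles down to a Java volatile field; the object and method names are illustrative). The program order rule orders the write to data before the volatile write to ready, and the volatile variable rule orders that write before the read that sees true, so the reader is guaranteed to observe data = 42:

object Publication {
  var data: Int = 0           // plain, non-volatile field
  @volatile var ready = false // volatile flag

  // writer thread
  def publish(): Unit = {
    data = 42
    ready = true              // volatile write
  }

  // reader thread
  def consume(): Int = {
    while (!ready) {}         // spins until the volatile read returns true
    data                      // guaranteed to be 42 by condition 1)
  }
}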

Instructions can be reordered as long as the happens-before order is preserved. The code below

t1 = 1;
t2 = 2;
t3 = t1+t2;
t4 = 2*t1;

can for instance be reordered as follows

t2 = 2;
t1 = 1;
t4 = 2*t1;
t3 = t1+t2;

If two threads share data without using any synchronization, the writes of one thread are considered unordered with respect to the other thread; according to condition 2), the execution is not deterministic, and the writes of one thread might or might not be visible to the other thread.

Let us consider the two threads below, and one given execution:

  T1        T2

s = s+1 

          s = s+1

s = s+1 

The second increment might or might not see the previous increment in T1. Similarly, the third increment might or might not see the side-effect of the second increment.

In improperly synchronized programs, a read will return the value of one of the possible matching writes, according to conditions 1) and 2). This corresponds to data races. In properly synchronized programs, a read has one unique ancestor write.

  T1        T2

lock m
  |
s = s+1 
  |
unlock m  
          \  
          lock m
             |
          s = s+1
             |
          unlock m
         /
lock m
  |
s = s+1 
  |
unlock m                  

Implementing Happens-Before

Conditions 1) and 2) specify that side-effects might or might not be visible consistently to threads in improperly synchronized programs. They enable the implementation to not read the shared memory all the time, using either CPU caches or compiler optimizations. The specification, however, never speaks of specific implementation choices like caching and optimizations.

For instance, the code

t2 = s+1
t3 = s*2

could be rewritten by a compiler optimization as

t = s
t2 = t+1
t3 = t*2

where t caches the value read from s.

Typically, the compiler will emit memory barriers to flush the CPU caches. The JSR-133 Cookbook provides guidance about implementation issues.

Data Races

Failure to properly synchronize accesses to shared data is called a data race. Conversely, a program is data race free if all accesses to shared state are synchronized. But what exactly does data race freedom mean?

Is the program data race free if,

  1. accesses to shared data happen within critical sections?
  2. shared data is declared volatile?
  3. accesses to shared data are never concurrent?

Issues with the memory model are typically discussed using the double-checked locking pattern and the “pausable” thread pattern. They provide only partial answers to these questions.

Point 1 is true. If the accesses to shared data are protected by the same critical section, there is no possible data race.

Point 2 is also true. If you declare variables as volatile, the side-effects will be made visible to the other threads and there will be no data race. But remember: data race freedom does not mean that the behavior is correct. Consider the trivial example below:

counter = counter + 1;

Making the counter volatile won’t suffice to ensure all increments are recorded.
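
A quick experiment makes the point (a sketch in Scala; the counts are illustrative): the volatile counter loses updates because the increment is a read-modify-write, while an AtomicInteger does not.

import java.util.concurrent.atomic.AtomicInteger

object LostUpdates {
  @volatile var counter = 0         // visible, but counter += 1 is not atomic
  val atomic = new AtomicInteger(0) // atomic read-modify-write

  def main(args: Array[String]): Unit = {
    val threads = (1 to 4).map { _ =>
      new Thread(new Runnable {
        def run(): Unit = for (_ <- 1 to 100000) {
          counter += 1             // two threads may read the same value: updates get lost
          atomic.incrementAndGet() // never loses an update
        }
      })
    }
    threads.foreach(_.start())
    threads.foreach(_.join())
    println(s"volatile: $counter, atomic: ${atomic.get}") // volatile typically < 400000
  }
}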

Point 3 holds, but requires some clarification of the underlying assumptions (the answers to my stackoverflow question failed to be clear-cut and authoritative). Let us consider the situation where multiple threads manipulate an object X that is plain data, but alternate their temporal execution (so that X is never accessed concurrently) via another object Y that relies on concurrency control (wait, notify, synchronize). Should the fields of object X be volatile or not?

If we assume that the threads are synchronized using at least one of the concurrency control primitives (and not just temporal alternation thanks, say, to well-timed sleep statements), this implies the alternate acquisition of at least one lock m, as for instance in:

while (true) {
    synchronized (m) { m.wait(); }
    counter = counter + 1;
    synchronized (m) { m.notify(); }
}

There is a happens-before dependency between the increment statement and the release of the wait lock. By construction, when the lock is released, exactly one thread will acquire it and wait again. So exactly one increment will happen-after the release of the wait lock. Consequently, each increment is guaranteed to see the value of the previous increment; no need for volatile.

More