Lateral Thinking

Lateral thinking is a term coined by Edward de Bono to characterize the generation of alternative ideas, as opposed to vertical thinking, which generates ideas through logic and stepwise refinement. A more colloquial way to describe lateral thinking is “thinking outside the box.”

Lateral thinking is a great way to improve problem solving. Often, finding the best solution requires a creative move: stepping away from the existing solution and starting from a new angle.

As a reminder of the power of lateral thinking, let us take an egg and a spoon. You are hosting a brunch: what do you provide to help your guests cut their eggs?

With vertical thinking you might come up with this solution:


With lateral thinking, maybe with this one:


I was absolutely amazed the first time I saw this device in action. The cut is perfect. Also, I would probably never have arrived at this solution, no matter how long I stared at my egg.

Now, each time I discuss a design issue, I remember that brunch. I try to take a step back, return to the root of the problem to solve, and ask: could we do this completely differently?

Sometimes the best way to cut an egg is not to cut it at all.

Gall’s Law

Gall’s law states that complex systems can only be the result of an evolutionary process, not of a design from scratch:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.  – John Gall (1975, p.71)

A complex system evolves from simpler systems by adding successive deltas of complexity. The only way to build a complex system is through iteration. That’s what evolution is about.

Iterations enable us to get feedback, correct, and improve the system: see what works and what doesn’t, fix mistakes.

The system must be working after each iteration. You can add new features, as long as they refine the existing system and keep it running.

A tadpole becomes a frog by developing its legs, then its arms, and finally shrinking its tail. The frog’s legs, arms and body aren’t developed individually and assembled at the end. That’s not how evolution works.


Also, you cannot evolve everything at once, since in the meantime the system might not work. The tadpole changes one thing at a time: legs, then arms, then tail. Each iteration needs focus.

Gall’s law is a relief. It’s OK not to be able to handle all the complexity at once. And it’s not only you–it’s everybody.

A complex system cannot be built using only theory and first principles, because there will always be details of the environment that we were not aware of. The only way to make sure something works is to test it for real. Practice trumps theory.

Obsessing over getting it right the first time is counterproductive. Just start somewhere and iterate. Too much unknown blocks our creativity, but once we have something concrete, ideas for improvement come easily.

The tadpole also teaches us a lesson here: it first develops a tail, which later disappears. The tail is a good idea in the water, but not so much on the ground. You will have to reinvent yourself occasionally.

Unit Testing Matters

Unit testing is a practice that can be explained in one sentence: each method should have an associated test that verifies its correctness. The idea is very simple. What is amazing about unit testing is how powerful this simple practice actually is. At first, unit testing seems like a mere device to prevent coding mistakes. Its main benefit seems obvious:

Unit testing guarantees that the code does what it should.

This is actually very good, since it’s remarkably easy to make programming mistakes: typos in SQL statements, improper boundary conditions, unreachable code, etc. Unit tests will detect these flaws. Shortly after, you will realize that it’s much easier to test methods that are short and simple. This gives unit testing a second benefit:

Unit testing favors clean code.

This is also very good. Unit testing forces developers to name things and break down code with more care, which increases the readability of the code base. Now, armed with a growing suite of tests, you will feel more confident changing business logic, at least when the change has local effects. This is the third benefit of unit testing:

Unit testing provides the safety net that enables changes.

This is excellent. Fear is one of the prime factors that lead to code rot. With unit tests, you can ensure that you don’t break existing behavior, and can cleanly refactor or extend the code base. You might object that changes are not always localized, and that unit tests don’t help in such cases. But remember: a non-local change is nothing more than a sequence of local changes. Changes at the local level represent maybe 80% of the work; the remaining 20% is about making sure that the local changes fit together. Unit tests help with the 80%; integration tests and careful thinking will do for the other 20%.

As you become enamoured with unit testing, you will try to cover every line you write, and make it a personal challenge to achieve full coverage every time. This isn’t always easy. You will embrace dependency inversion to decouple objects, and become proficient with mocks to abstract dependencies. You will systematically separate infrastructure code from business logic. With time, your production code will be organized so that your unit tests can always obtain an instance of the object under test easily. Along the way, you will notice that the classes you write are more focused and easier to understand. This is the fourth benefit of unit testing:

Unit testing improves software design.

This is amazing! Unit testing literally highlights design smells. If writing unit tests for a class is painful, your code is waiting to be refactored. Maybe it depends on global state (yes, I’m looking at you, Singleton), maybe it depends on the environment (yes, I’m looking at you, java.lang.System), maybe it does too much (yes, I’m looking at you, Blob), maybe it relies too much on other classes (yes, I’m looking at you, Feature Envy). Unit testing is “a microscope for object interactions.” It forces you to think very carefully about your dependencies and to minimize them as much as possible. It naturally promotes the SOLID principles, and leads to a better decomposition of the software.
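To make the microscope concrete, here is a minimal sketch of the kind of refactoring unit testing pushes you toward. The names (TrialPeriod, Clock) are invented for illustration; the point is replacing a hidden dependency on java.lang.System with an injected one:

// Before: depends on the environment, painful to test
public class TrialPeriod
{
    private static final long EXPIRY = 1000000L;

    public boolean isExpired ()
    {
        return System.currentTimeMillis () > EXPIRY;
    }
}

// After: the clock is a dependency, inverted and injectable
public interface Clock
{
    long now ();
}

public class TrialPeriod
{
    private static final long EXPIRY = 1000000L;
    private final Clock clock;

    public TrialPeriod (Clock clock)
    {
        this.clock = clock;
    }

    public boolean isExpired ()
    {
        return clock.now () > EXPIRY;
    }
}

// In production: new TrialPeriod (System::currentTimeMillis)
// In a test:     new TrialPeriod (() -> EXPIRY + 1) must report expiry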

Honestly, I find it amazing that such a simple practice can lead to so many benefits. There are many practices out there that improve software development in some way. What makes unit testing special is the ridiculous asymmetry between its simplicity and its outcome.


The Evolution Spiral

Software evolves over time. Typically, it grows. Evolution can be seen as a sequence of extensions and modifications of the software. An extension adds new units to the system but does not alter existing units; a modification alters existing units but does not add new ones.

Most evolutions are a mix of both: new units are added to the system, and existing units are modified to use them. Pure extensions are rare, since only specific architectures support them, e.g. through run-time discovery of new units with meta-programming. Pure modifications are however common, since many bugfixes fall into this category.
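Java’s ServiceLoader is one concrete mechanism for such pure extensions. In this sketch (the Exporter interface is invented), a new exporter is added to the system without altering any existing unit:

import java.util.ServiceLoader;

// An existing contract that extensions implement
public interface Exporter
{
    void export (Object document);
}

// Client code discovers implementations at run-time. Dropping a new
// Exporter on the classpath (with its META-INF/services entry) is a
// pure extension: no existing unit is modified.
public class Exporters
{
    public static void exportWithAll (Object document)
    {
        for ( Exporter exporter : ServiceLoader.load (Exporter.class) )
        {
            exporter.export (document);
        }
    }
}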

Software evolution can be seen as a spiral of successive extensions and modifications.

Reuse and Contracts

Extending software means adding new client units that reuse existing units while fulfilling their contracts. Reuse makes modifications harder to introduce, since modifying the contract of a unit requires changes to the clients of this unit.
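A small invented example of that effect: strengthening the contract of a reused unit silently breaks its clients.

// v1 contract: quantity may be zero
public double shippingCost (int quantity)
{
    return quantity * 1.5;
}

// v2 strengthens the precondition: quantity must now be positive.
// Every client that legitimately passed 0 under v1 is now broken,
// and each of them must be found and updated.
public double shippingCost (int quantity)
{
    if ( quantity <= 0 )
    {
        throw new IllegalArgumentException ("quantity must be positive");
    }
    return quantity * 1.5;
}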

How to Achieve Extensibility

The first move to make the system extensible is to make existing units reusable “as-is”. To achieve this, it is critical to define their intent and granularity judiciously. Well-defined units can be composed into larger units. Identify responsibilities that capture what a unit knows and does. Assign few responsibilities to classes, objects, and functions, and promote encapsulation. Ideally, assign only one responsibility to a unit (the S in SOLID). Identify contracts and invariants. Also, balance simplicity vs. generality and use vs. reuse (the I in SOLID). Making units composable is not always easy, depending on their contracts. For instance, lock-based units are notoriously hard to compose.

The second move to make the system extensible is to make existing units “configurable” (the O in SOLID). Inheritance, generics, and dependency inversion (the D in SOLID) are all mechanisms that make extensibility possible. The principle of “composition over inheritance” is a disguised promotion of dependency inversion to achieve polymorphic behavior; the strategy design pattern is closely related. The “configuration” must preserve the contract of the “configured” unit. For instance, a subclass must not weaken the contract of its superclass (the L in SOLID).
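As a minimal sketch of such a “configuration” (all names invented), the unit below is closed for modification but open for extension through an injected strategy, i.e. dependency inversion in the style of the strategy pattern:

// The configuration point: a strategy contract
public interface PricingPolicy
{
    // contract: returns a non-negative price
    double priceFor (int quantity);
}

// The configured unit: new pricing behavior is a new unit,
// not a modification of this one
public class Order
{
    private final PricingPolicy policy;

    public Order (PricingPolicy policy)
    {
        this.policy = policy;
    }

    public double total (int quantity)
    {
        return policy.priceFor (quantity);
    }
}

// Usage: new Order (q -> q * 9.90) or new Order (new BulkPricing ()).
// Any implementation must preserve the contract (the L in SOLID).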

How to Achieve Modifiability

How easy the system is to modify depends on the coupling introduced by reuse. This coupling is both a curse and a blessing. It is a blessing when all clients of a unit require the modification and no contract is modified–the change happens in exactly one place and has low impact. It is a curse when only a few clients require the modification, or when the contract changes–clients must then be selectively updated to accommodate the change one way or another. The worst is of course to overlook non-obvious changes to contracts and believe they are safe, for instance with threading issues.

The same advice that achieves reuse “as-is” also favors modifiability. Essentially, it promotes low coupling and high cohesion, which leads to low change impact.

More


Design principles:

Those are my principles, if you don’t like them, I have others. (Groucho Marx)

  • Uncle Bob’s Principles of OOD
  • Uniform access principle
  • Representation independence
  • Liskov Substitution Principle
  • Code Against Interfaces
  • Dependency injection


Anti-if Programming

Ifs are bad, ifs are evil. Ifs are against object-oriented programming. Ifs defeat polymorphism — these popular remarks are easier said than enforced.

Ifs can pop up for various reasons, which would deserve a full study in order to build a decent taxonomy. Here, however, is a quick one:

  • Algorithmic if. An algorithmic if participates in an algorithm that is inherently procedural, where branching is required. Not much can be done about these. Though they tend to increase the cyclomatic complexity, they are not the most evil. If the algorithm is inherently complex, so be it.
  • Polymorphic if. A class of objects deserves a slightly different treatment each time. This is the polymorphic if. Following object-oriented principles, the treatment should be pushed into the corresponding class, and voila! There are however millions of reasons why we might not want the corresponding logic to live in the class of the object to treat, defeating object-oriented basics. In such cases, the problem can be alleviated with visitors, decorators, or other composition techniques.
  • Strategic if. A single class deals with N different situations. The logic is essentially the same, but there are slight differences in each situation. This can be refactored with an interface and several implementations. The implementations can inherit from each other, or use delegation to maximize reuse.
  • Dynamic if. The strategic if assumes that the situation doesn’t change during the lifetime of the object. If the behavior of the object needs to change dynamically, the situation becomes even more complicated. Chances are that attributes will be used to enable or disable certain behavior at run-time. Such ifs can be refactored with patterns such as decorators.
  • Null if. Testing for nullity is so common that it deserves a special category, even though it could be seen as a special case of another category. Null ifs can appear to test the termination of an algorithm, the non-existence of data, sanity checks, etc. Various techniques exist to eradicate such ifs depending on the situation: the Null Object pattern (sketched below), adding as many method signatures as required, introducing polymorphism, using assertions, etc.
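As a minimal sketch of the Null Object pattern (all names invented), “nothing” gets a real type, so clients no longer test for null:

public interface Logger
{
    void log (String message);
}

// The Null Object: a do-nothing implementation that is safe to call
public class NullLogger implements Logger
{
    public void log (String message)
    {
        // deliberately empty
    }
}

public class Service
{
    // never null: callers that want no logging pass a NullLogger
    private final Logger logger;

    public Service (Logger logger)
    {
        this.logger = logger;
    }

    public void run ()
    {
        // no "if ( logger != null )" test needed anymore
        logger.log ("service started");
    }
}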

A step-by-step Anti-If refactoring

Here is a step-by-step refactoring of a strategic if I came across. I planned to send it to the Anti-IF Campaign, so I took the time to document it. The code comes from the txfs project.

Let’s start with the original code:

public void writeFile (String destFileName, InputStream data, boolean overwrite)
    throws TxfsException
{
    FileOutputStream fos = null;
    BufferedOutputStream bos = null;
    boolean isNew = false;
    File f = new File (infos.getPath (), destFileName);
    if ( !overwrite && f.exists () )
    {
        throw new TxfsException ("Error writing in file (file already exist):" +
            destFileName);
    }

    try
    {
        if ( !f.exists () )
        {
            isNew = true;
        }
        DirectoryUtil.mkDirs (f.getParentFile ());
        try
        {
            Copier.copy (data, f);
        }
        finally
        {
            if ( isNew && isInTransaction () )
            {
                addCreatedFile (destFileName);
            }
            IOUtils.closeInputStream (data);
        }
    }
    catch ( IOException e )
    {
        throw new TxfsException ("Error writing in file:" + destFileName, e);
    }
}

Not very straightforward, is it? The logic is however quite simple: if the overwrite flag is set, the file can be written even if it already exists; otherwise an exception must be thrown. In addition, if a transaction is active, the file must be added to the list of created files, so that it can be removed if the transaction is rolled back later. The file must be added even if an exception occurs, for instance if the file was only partially copied.

What happens is that we have two concerns: (1) the overwrite rule and (2) the transaction rule.

Let’s try to refactor that with inheritance. A base class implements the logic when there is no transaction, and a subclass refines it to support transactions.

public void writeFile (String destFileName, InputStream data, boolean overwrite)
    throws TxfsException
{
    FileOutputStream fos = null;
    BufferedOutputStream bos = null;
    boolean isNew = false;
    File f = new File (infos.getPath (), destFileName);
    if ( !overwrite && f.exists () )
    {
        throw new TxfsException ("Error writing in file (file already exist):" +
            destFileName);
    }

    try
    {
        // if ( !f.exists () )
        // {
        //     isNew = true;
        // }
        DirectoryUtil.mkDirs (f.getParentFile ());
        try
        {
            Copier.copy (data, f);
        }
        finally
        {
            // if ( isNew && isInTransaction () )
            // {
            //     addCreatedFile (destFileName);
            // }
            IOUtils.closeInputStream (data);
        }
    }
    catch ( IOException e )
    {
        throw new TxfsException ("Error writing in file:" + destFileName, e);
    }
}

The transaction concern is removed from the base method. The overriding method in the subclass then looks like:

public void writeFile (String destFileName, InputStream data, boolean overwrite)
    throws TxfsException
{
    try
    {
        super.writeFile (destFileName, data, overwrite);
    }
    finally
    {
        addCreatedFile (destFileName);
    }
}

But we then have two problems: (1) we don’t know whether the file is new, so it is always added to the list of created files; (2) if the base method throws an exception because the file already exists and the flag is false, we still add it to the list of created files when we shouldn’t.

We could change the base method to return a code (e.g. FileCreated and NoFileCreated). But return codes are generally not a good solution, and are quite ugly.

No, what we must do is remove some responsibility from the method. We split it into two methods, createFile and writeFile: one expects the file not to exist, the other expects it to exist.

void writeFile (String dst, InputStream data ) throws TxfsException
void createFile (String dst, InputStream data ) throws TxfsException

(The writeFile method which takes the additional overwrite flag can be composed out of the two previous ones, as sketched below.)
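A possible sketch of that composition, reusing the infos field from the original code: the remaining if simply dispatches to the focused method matching the file’s state.

public void writeFile (String destFileName, InputStream data, boolean overwrite)
    throws TxfsException
{
    File f = new File (infos.getPath (), destFileName);
    if ( !overwrite && f.exists () )
    {
        throw new TxfsException ("Error writing in file (file already exist):" +
            destFileName);
    }
    // dispatch to the focused method matching the file state
    if ( f.exists () )
    {
        writeFile (destFileName, data);
    }
    else
    {
        createFile (destFileName, data);
    }
}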

So our simplified createFile method in the subclass looks like:

public void createFile (String destFileName, InputStream data)
    throws TxfsException
{
    try
    {
        super.createFile (destFileName, data);
    }
    finally
    {
        // we add the file in any case
        addCreatedFile (destFileName);
    }
}

Alas, there is still the problem that if super.createFile fails because the file already exists, we add it to the list.

And here we start to realize that our exception handling scheme was a bit weak. The general TxfsException is rather uninformative: we need to refine the exception hierarchy to convey enough information to support meaningful handling.

void writeFile (String dst, InputStream data)
    throws TxfsException, TxfsFileDoesNotExistException

void createFile (String dst, InputStream data)
    throws TxfsException, TxfsFileAlreadyExistsException

Here is then the final code:

public void createFile (String destFileName, InputStream data)
    throws TxfsFileAlreadyExistsException, TxfsException
{
    // We cannot use a finally block: if the failure happens because
    // the file already existed, we must not add it to the list.
    try
    {
        super.createFile (destFileName, data);
        // the creation succeeded: record the file
        addCreatedFile (destFileName);
    }
    catch ( TxfsFileAlreadyExistsException e )
    {
        // we don't add the file if it already exists
        throw e;
    }
    catch ( TxfsException e )
    {
        // any other failure: the file may have been partially written,
        // so we record it for rollback and rethrow
        addCreatedFile (destFileName);
        throw e;
    }
}

Conclusion

Anti-If refactoring is not so easy. There are various kinds of ifs, some of which are OK, and some of which are bad. Removing ifs can imply changing the design significantly, including the inheritance hierarchy and the exception hierarchy.

A tool to analyze the code and propose refactoring suggestions would be nice, but for the time being, anti-if refactoring remains in the hands of developers, who must ensure the semantic equivalence of the code before and after the refactoring.