SRP as part of SOLID

Clean Code Alliance organized a meetup about SOLID principles.
I had the opportunity to talk about the Single Responsibility Principle as part of SOLID.
It’s a presentation I gave several times in the past.

It was fun talking about it.
There were many interesting and challenging questions, which gave me lots of things to think about.

Title:
SRP as part of SOLID

Abstract:
The Single Responsibility Principle (SRP) is part of the SOLID acronym. The SOLID principles help us design better code. Applying those principles helps us have maintainable code, fewer bugs and easier testing. The SRP is the foundation of better designed code. In this session I will introduce the SOLID principles and explain in more detail what SRP is all about. Applying those principles is not sci-fi, it is real, and I will demonstrate it.
Yesterday I gave this talk at a meetup about SRP as part of SOLID.

Bio:
Eyal Golan is a Senior Java developer and agile practitioner. He is responsible for building high throughput, low latency server infrastructure, manages the continuous integration and deployment of the system and leads the coding practices. He practices TDD and clean code, and is on the path to software craftsmanship.

Following me, Hayim Makabee gave a really interesting talk about The SOLID Principles Illustrated by Design Patterns.

Here are the slides.

And here is the video (in Hebrew).

Thanks to the organizers, Boris and Itzik, and mostly to the audience, who seemed very interested.


Working with Legacy Test Code

Legacy Code and Smell by Tests

Working with unit tests can help improve the code base in many ways.
One of the aspects I like most is that tests can point us to code smells in the production code.
For example, if a test needs a large setup or asserts many outputs, it can indicate that the unit under test doesn't follow good design principles, such as SRP and other OOD principles.

But sometimes the tests themselves are poorly structured or designed.
In this post I will give two examples of such cases, and show how I solved them.

Test Types (or Layers)

There are several types, or layers, of tests.

  • Unit Tests
    Unit tests should be simple to describe and to understand.
    Those tests should run fast. They should test one thing: one unit (method?) of work.
  • Integration Tests
    Integration tests are more vague in definition.
    What kind of modules do they check?
    Integration of several modules together? Dependency-Injector wiring?
    Test using real DB?
  • Behavioral Tests
    Those tests will verify the features.
    They may be the interface between the PM / PO and the dev team.
  • End2End / Acceptance / Staging / Functional
    High level tests. May run on production or production-like environment.

Complexity of Tests

Basically, the “higher level” the test, the more complex it is.
Also, the ratio between the possible number of tests and the production code increases dramatically per test level.
Unit tests will grow linearly as the code grows.
But starting with integration tests and higher level ones, the options start to grow at an exponential rate.
A simple calculation:
if two classes interact with each other, and each has 2 methods, how many options should we check in order to cover everything? Already 2 x 2 = 4 interaction combinations, and if those methods also have some control flow, like an if, each branch multiplies that number further.

Sporadically Failing Tests

There are many reasons for a test to be “problematic”.
One of the worst is a test that sometimes fails and usually passes.
The team ignores the CI’s mails. It creates noise in the system.
You can never be sure if there’s a bug or something was broken or it’s a false alarm.
Eventually we’ll disable the CI because “it doesn’t work and it’s not worth the time”.

Integration Test and False Alarm

Any type of test is subject to false alarms if we don't follow basic rules.
The higher the test level, the more chance there is of false alarms.
In integration tests, there's a higher chance of false alarms due to external resource issues:
no internet connection, no DB connection, random misses and many more.

Our Test Environment

Our system is “quasi legacy”.
It's not exactly legacy because it has tests. Those tests even have good coverage.
It is legacy because of the way it is (un)structured and the way the tests are built.
It used to be covered only by integration tests.
In the past few months we have started implementing unit tests, especially for new code and new features.

All of our integration tests inherit from BaseTest, which inherits Spring’s AbstractJUnit4SpringContextTests.
The test context wires almost everything: about 95% of the production code.
It takes time, but even worse, it connects to real external resources, such as MongoDB and services that connect to the internet.

In order to improve test speed, a few weeks ago I changed MongoDB to an embedded one. It improved the running time of the tests by an order of magnitude.

This type of setup makes testing much harder.
It's very difficult to mock services, and the environment is not isolated from the internet, the DB and much more.

After this long introduction, I want to describe two problematic tests and the way I fixed them.
Their common failing attribute was that they sometimes failed and usually passed.
However, each failed for a different reason.

Case Study 1 – Creating Internet Connection in the Constructor

The first example shows a test which sometimes failed because of connection issues.
The tricky part was that a service was created in the constructor.
That service got an HttpClient, which was also created in the constructor.

Another issue was that I couldn't modify the test to use mocks instead of Spring wiring.
Here’s the original constructor (modified for the example):

private HttpClient httpClient;
private MyServiceOne myServiceOne;
private MyServiceTwo myServiceTwo;

public ClassUnderTest(PoolingClientConnectionManager httpConnenctionManager, int connectionTimeout, int soTimeout) {
	HttpParams httpParams = new BasicHttpParams();
	HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout);
	HttpConnectionParams.setSoTimeout(httpParams, soTimeout);
	HttpConnectionParams.setTcpNoDelay(httpParams, true);
	httpClient = new DefaultHttpClient(httpConnenctionManager, httpParams);

	myServiceOne = new MyServiceOne(httpClient);
	myServiceTwo = new MyServiceTwo();
}

The tested method used myServiceOne.
And the test sometimes failed because of connection problems in that service.
Another problem was that the result from the web wasn't always deterministic, and therefore the test failed.

The way the code is written does not enable us to mock the services.

In the test code, the class under test was injected using the @Autowired annotation.

The Solution – Extract and Override Call

The idea was taken from Working Effectively with Legacy Code.

  1. Identify what I need to fix.
    In order to make the test deterministic, without a real connection to the internet, I need access to the services' creation.
  2. Introduce protected methods that create those services.
    Instead of creating the services directly in the constructor, the constructor will call those methods.
  3. In the test environment, I will create a class that extends the class under test.
    This class will override those methods and will return fake (mocked) services.

Solution’s Code

public ClassUnderTest(PoolingClientConnectionManager httpConnenctionManager, int connectionTimeout, int soTimeout) {
	HttpParams httpParams = new BasicHttpParams();
	HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout);
	HttpConnectionParams.setSoTimeout(httpParams, soTimeout);
	HttpConnectionParams.setTcpNoDelay(httpParams, true);
	
	this.httpClient = createHttpClient(httpConnenctionManager, httpParams);
	this.myServiceOne = createMyServiceOne(httpClient);
	this.myServiceTwo = createMyServiceTwo();
}

protected HttpClient createHttpClient(PoolingClientConnectionManager httpConnenctionManager, HttpParams httpParams) {
	return new DefaultHttpClient(httpConnenctionManager, httpParams);
}

protected MyServiceOne createMyServiceOne(HttpClient httpClient) {
	return new MyServiceOne(httpClient);
}

protected MyServiceTwo createMyServiceTwo() {
	return new MyServiceTwo();
}
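
// In the test class: the mocks and the subclass that overrides the factory methods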
private MyServiceOne mockMyServiceOne = mock(MyServiceOne.class);
private MyServiceTwo mockMyServiceTwo = mock(MyServiceTwo.class);
private HttpClient mockHttpClient = mock(HttpClient.class);

private class ClassUnderTestForTesting extends ClassUnderTest {

	private ClassUnderTestForTesting(int connectionTimeout, int soTimeout) {
		super(null, connectionTimeout, soTimeout);
	}
	
	@Override
	protected HttpClient createHttpClient(PoolingClientConnectionManager httpConnenctionManager, HttpParams httpParams) {
		return mockHttpClient;
	}

	@Override
	protected MyServiceOne createMyServiceOne(HttpClient httpClient) {
		return mockMyServiceOne;
	}

	@Override
	protected MyServiceTwo createMyServiceTwo() {
		return mockMyServiceTwo;
	}
}

Now instead of wiring the class under test, I created it in the @Before method.
It accepts other services (not described here), which I got using @Autowired.
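
Here is a hedged sketch of how that @Before method might look; the timeout values are arbitrary and only illustrate creating the test-specific subclass by hand instead of wiring it:

// A hedged sketch, not the original test code
private ClassUnderTest classUnderTest;

@Before
public void setUp() {
	// Create the test-specific subclass by hand instead of letting Spring wire it,
	// so the overridden factory methods return the mocks declared above
	classUnderTest = new ClassUnderTestForTesting(1000, 1000);
}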

Another note: before creating the special class-for-test, I ran all integration tests of this class in order to verify that the refactoring didn’t break anything.
I also restarted the server locally and verified everything works.
It's important to do those verifications when working with legacy code.

Case Study 2 – Statistical Tests for Random Input

The second example describes a test that failed due to random results and a statistical assertion.

The code did a randomized selection between objects with similar attributes (I am simplifying the scenario here).
The Random object was created in the class’s constructor.

Simplified Example:

private Random random;

public ClassUnderTest() {
	random = new Random();
	// more stuff
}

//The method is package protected so we can test it
MyPojo select(List<MyPojo> pojos) {
	// do something
	int randomSelection = random.nextInt(pojos.size());
	// do something
	return pojos.get(randomSelection);
}

The original test did a statistical analysis.
I'll just explain it, as it is too complicated and verbose to show in full.
It had a loop of 10K iterations. Each iteration called the method under test.
It had a Map that counted the number of occurrences (returned result) per MyPojo.
Then it checked whether each MyPojo was selected around (10K / Number-Of-MyPojos) times, within some allowed deviation.
Example:
Say we have 4 MyPojo instances in the list.
Then the assertion verified that each instance was selected roughly 2500 times (10K / 4), e.g. between 2400 and 2600 times.
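
To make the problem concrete, here is a hedged reconstruction of roughly what such a test looks like; the names and exact bounds are illustrative, not the original code:

@Test
public void selectShouldDistributeEvenly() {
	// pojo1..pojo4 are pre-built test objects (names are illustrative)
	List<MyPojo> pojos = Arrays.asList(pojo1, pojo2, pojo3, pojo4);
	Map<MyPojo, Integer> occurrences = new HashMap<>();

	for (int i = 0; i < 10000; i++) {
		MyPojo selected = classUnderTest.select(pojos);
		Integer count = occurrences.get(selected);
		occurrences.put(selected, count == null ? 1 : count + 1);
	}

	// each pojo is expected around 2500 times; the fixed bound is exactly what makes the test flaky
	for (MyPojo pojo : pojos) {
		Integer boxed = occurrences.get(pojo);
		int actual = boxed == null ? 0 : boxed;
		assertTrue("unexpected distribution: " + actual, actual >= 2400 && actual <= 2600);
	}
}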

You can expect, of course, that sometimes the test failed. Increasing the allowed deviation will only reduce the number of false failures.

The Solution – Overload a Method

  1. Overload the method under test.
    In the new overloaded method, add a parameter of the same type as the class field.
  2. Move the code from the original method to the new one.
    Make sure you use the parameter of the method and not the class’s field. Different names can help here.
  3. Test the newly created method with a mock.

Solution Code

private Random random;

// Nothing changed in the constructor
public ClassUnderTest() {
	random = new Random();
	// more stuff
}

// The original method now delegates to the overloaded one
private MyPojo select(List<MyPojo> pojos) {
	return select(pojos, this.random);
}

//The method is package protected so we can test it
MyPojo select(List<MyPojo> pojos, Random inRandom) {
	// do something
	int randomSelection = inRandom.nextInt(pojos.size());
	// do something
	return pojos.get(randomSelection);
}
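
And here is a hedged sketch of a deterministic test for the new method, using a mocked Random (the pojo names are illustrative):

@Test
public void selectShouldReturnThePojoAtTheIndexTheRandomReturns() {
	// pojo1..pojo3 are pre-built test objects (names are illustrative)
	List<MyPojo> pojos = Arrays.asList(pojo1, pojo2, pojo3);
	Random mockRandom = mock(Random.class);
	// force the "random" selection to index 1, making the test deterministic
	when(mockRandom.nextInt(pojos.size())).thenReturn(1);

	MyPojo selected = classUnderTest.select(pojos, mockRandom);

	assertSame(pojo2, selected);
}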

Conclusion

Working with legacy code can be challenging and fun.
Working with legacy test code can be fun as well.
It feels really good to stop receiving annoying mails about failing tests.
It also increases the team's trust in the CI process.


Working With Legacy Code, What does it Really Mean

At the end of January I am going to speak at Agile Practitioners 2015 TLV.
I’ll be talking about Legacy Code and how to approach it.

As the convention’s name implies, we’re talking practical stuff.

So what is practical in working with legacy code?
Is it how to extract a method? Or maybe it's how to introduce a setter for a static singleton?
Break a dependency?
There are so many actions to take while working on legacy code.

But I want to stop for a minute and think.
What does it mean to work on legacy code?
How do we want the code to be after the changes?
Why do we need to change it? Do we really need to change it?

Definition
Let’s start with the definition of Legacy Code.
If you search the web you will see definitions such as “…Legacy code refers to an application system source code type that is no longer supported…” (from: techopedia)

People may think of legacy code as old, patched code.

The definitions above are correct (old, patched, un-maintained, etc.), but I think that the definition coined by Michael Feathers (Working Effectively with Legacy Code) is better.
He defined legacy code as

Code Without Tests

I like to add that legacy code is usually Code that cannot be tested.
So basically, if I wrote code 10 minutes ago which is not tested, and not testable, then it's already Legacy Code.

Questioning the Code
When approaching code (any code), I think we should ask ourselves the following questions constantly.

  • What’s wrong with this code?
  • How do we want the code to be?
  • How can I test this piece of code?
  • What should I test?
  • Am I afraid to change this part of code?

Why Testable Code?
Why do we want to test our code?

Tests are the harness of the code.
It’s the safety net.

Imagine a circus trapeze show. There's a safety net (or a mattress) below.
The athletes can perform, knowing that nothing harmful will happen if they fall (well, except maybe to their pride).

Recently I went to an indie circus show.
The band was playing and a girl came to do some tricks on a high rope.
But before she even started, she secured a mattress below.

And this is what working with legacy code is all about:
Put a mattress before you start doing tricks…
Or, in our words, add tests before you work / change the legacy code.

Think about it: the list of questions above can be answered (or at least thought about) just by understanding that we need to write tests for our code.
Once you put up your safety net, you're not afraid to jump.
⇒ once you write tests, you can add features, fix bugs and refactor.

Conclusion
In this post I summarized what it means to work with legacy code.
It's simple:
working with legacy code is knowing how to write tests for untested code.

The crucial thing is understanding that we need to do that; understanding that we need to invest the time to write those tests.
I think that this is as important as knowing the techniques themselves.

In the following post(s) I will give examples of some techniques.

A girl is doing trapeze with a mattress below


Law of Demeter

Reduce coupling and improve encapsulation…

General
In this post I want to go over the Law of Demeter (LoD).
I find this topic extremely important for keeping the code clean, well-designed and maintainable.

In my experience, seeing it broken is a huge smell of bad design.
Following the law, or refactoring based on it, leads to much improved, more readable and more maintainable code.

So what is the Law of Demeter?
I will start by mentioning the 4 basic rules:

Law of Demeter says that a method M of object O can access / invoke methods of:

  1. O itself
  2. M’s input arguments
  3. Any object created in M
  4. O's direct fields / dependencies

These are fairly simple rules.

Let’s put this in other words:
Each unit (method) should have limited knowledge about other units.

Metaphors
The most common one is: Don’t talk to strangers

How about this:
Suppose I buy something at 7-11.
When I need to pay, will I give my wallet to the clerk so she will open it and get the money out?
Or will I give her the money directly?

How about this metaphor:
When you take your dog out for a walk, do you tell it to walk or its legs?

Why do we want to follow this rule?

  • We can change a class without having a ripple effect of changing many others.
  • We can change called methods without changing anything else.
  • Using LoD makes our tests much easier to construct. We don't need to write so many 'when' clauses for mocks that return mocks that return mocks.
  • It improves the encapsulation and abstraction (I'll show it in the example below).
    But basically, we hide "how things work".
  • It makes our code less coupled. A calling method is coupled to only one object, and not to all of its inner dependencies.
  • It will usually model the real world better.
    Take as an example the wallet and payment.

Counting Dots?
Although usually many dots imply LoD violation, sometimes it doesn’t make sense to “merge the dots”.
Does:
getEmployee().getChildren().getBirthdays()
suggest that we do something like:
getEmployeeChildrenBirthdays() ?
I am not entirely sure.

Too Many Wrapper Classes
This is another possible outcome of trying to comply with LoD.
In this particular situation, I strongly believe that it’s another design smell which should be taken care of.

As always, we must use common sense while coding, cleaning and / or refactoring.

Example
Suppose we have a class: Item
The item can hold multiple attributes.
Each attribute has a name and values (it's a multi-value attribute).

The simplest implementation would be to use a Map.

public class Item {
	private final Map<String, Set<String>> attributes;

	public Item(Map<String, Set<String>> attributes) {
		this.attributes = attributes;
	}

	public Map<String, Set<String>> getAttributes() {
		return attributes;
	}
}

Let's have a class ItemSaver that uses the Item and its attributes:
(please ignore the unstructured methods; this is an example of LoD, not SRP 🙂)

public class ItemSaver {
	private String valueToSave;

	public ItemSaver(String valueToSave) {
		this.valueToSave = valueToSave;
	}

	public void doSomething(String attributeName, Item item) {
		Set<String> attributeValues = item.getAttributes().get(attributeName);
		for (String value : attributeValues) {
			if (value.equals(valueToSave)) {
				doSomethingElse();
			}
		}
	}

	private void doSomethingElse() {
	}
}

Suppose I know that it’s a single value (from the context of the application).
And I want to take it. Then the code would look like:
Set<String> attributeValues = item.getAttributes().get(attributeName);
String singleValue = attributeValues.iterator().next();
// String singleValue = item.getAttributes().get(attributeName).iterator().next();

I think it is clear that we have a problem.
Wherever we use the attributes of the Item, we know how it works. We know its inner implementation.
It also makes our tests much harder to maintain.

Let's see an example of a test using mocks (Mockito):
You can imagine how much effort it takes to change and maintain it.

Item item = mock(Item.class);
Map<String, Set<String>> attributes = mock(Map.class);
Set<String> values = mock(Set.class);
Iterator<String> iterator = mock(Iterator.class);
when(iterator.next()).thenReturn("the single value");
when(values.iterator()).thenReturn(iterator);
when(attributes.containsKey("the-key")).thenReturn(true);
when(attributes.get("the-key")).thenReturn(values);
when(item.getAttributes()).thenReturn(attributes);

We could use a real Item instead of mocking, but we'd still need to create lots of pre-test data.

Let’s recap:

  • We exposed the inner implementation of how Item holds Attributes
  • In order to use attributes, we needed to ask the item and then to ask for inner objects (the values).
  • If we ever want to change the attributes implementation, we will need to make changes in the classes that use Item and the attributes. Probably a lot of classes.
  • Constructing the test is tedious, cumbersome, error-prone and requires lots of maintenance.

Improvement
The first improvement would be to let Item delegate to its attributes.

public class Item {
	private final Map<String, Set<String>> attributes;

	public Item(Map<String, Set<String>> attributes) {
		this.attributes = attributes;
	}

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.get(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return values(attributeName).iterator().next();
	}
}

And the test becomes much simpler.
Item item = mock(Item.class);
when(item.getSingleValue("the-key")).thenReturn("the single value");
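
For completeness, here is a hedged sketch (not from the original post) of how ItemSaver.doSomething might look after this change:

public void doSomething(String attributeName, Item item) {
	// ItemSaver no longer digs into the Map; it only talks to Item
	for (String value : item.values(attributeName)) {
		if (value.equals(valueToSave)) {
			doSomethingElse();
		}
	}
}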

We are (almost) totally hiding the implementation of attributes from other classes.
The client classes are not aware of the implementation, except in two cases:

  1. Item still knows how attributes are built.
  2. The class that creates Item (whichever it is), also knows the implementation of attributes.

The two points above mean that if we change the implementation of the attributes (to something other than a map), at least two other classes will need to be changed. This is a great example of high coupling.

The Next Step Improvement
The solution above will sometimes (usually?) be enough.
As pragmatic programmers, we need to know when to stop.
However, let's see how we can improve the first solution even further.

Create a class Attributes:

public class Attributes {
	private final Map<String, Set<String>> attributes;

	public Attributes() {
		this.attributes = new HashMap<>();
	}

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.get(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return values(attributeName).iterator().next();
	}

	public Attributes addAttribute(String attributeName, Collection<String> values) {
		this.attributes.put(attributeName, new HashSet<>(values));
		return this;
	}
}

And the Item that uses it:
public class Item {
	private final Attributes attributes;

	public Item(Attributes attributes) {
		this.attributes = attributes;
	}

	public boolean attributeExists(String attributeName) {
		return attributes.attributeExists(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.values(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return attributes.getSingleValue(attributeName);
	}
}

(Did you notice? The implementation of attributes inside Item was changed, but the test did not need to change. This is thanks to the small delegation change.)

In the second solution we improved the encapsulation of Attributes.
Now even Item does not know how it works.
We can change the implementation of Attributes without touching any other class.
We can make different implementations of Attributes:
– An implementation that holds a Set of values (as in the example).
– An implementation that holds a List of values.
– A totally different data structure that we can think of.
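
For example, here is a hedged sketch of what the List-backed variant from the list above might look like; it is an illustration, not code from the post:

// A hedged, hypothetical sketch: the values are stored in Lists (keeping order, allowing duplicates),
// while the public API stays the same, so Item and its callers remain untouched
public class Attributes {
	private final Map<String, List<String>> attributes = new HashMap<>();

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return new LinkedHashSet<>(attributes.get(attributeName));
	}

	public String getSingleValue(String attributeName) {
		return attributes.get(attributeName).get(0);
	}

	public Attributes addAttribute(String attributeName, Collection<String> values) {
		this.attributes.put(attributeName, new ArrayList<>(values));
		return this;
	}
}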

As long as all of our tests pass, we can be sure that everything is OK.

What did we get?

  • The code is much more maintainable.
  • Tests are simpler and more maintainable.
  • It is much more flexible. We can change implementation of Attributes (map, set, list, whatever we choose).
  • Changes in Attributes do not affect any other part of the code. Not even the classes that use it directly.
  • Modularization and code reuse. We can use the Attributes class in other places in the code.

Using Reflection for Testing

I am working on a presentation about the ‘Single Responsibility Principle’, based on my previous post.
It takes most of my time.

In the meantime, I want to share some sample code showing how I test inner fields in my classes.
I am doing it for a special case of testing, which is more of an integration test.
In the standard unit testing of the dependent class, I am using mocks of the dependencies.

The Facts

  1. All of the fields (and dependencies) in our classes are private
  2. The classes do not have getters for their dependencies
  3. We wire things up using Spring (XML context)
  4. I want to verify that dependency interface A is wired correctly to dependent class B

One approach would be to wire everything and then run some kind of integration test of the logic.
I don’t want to do this. It will make the test hard to maintain.

The other approach is to check wiring directly.
And for that I am using reflection.

Below is sample code of the testing method, and its usage.
Notice how I catch the exceptions and throw a RuntimeException in case there is a problem.
This way, the test code is cleaner.


// Somewhere in a different utility class for testing
@SuppressWarnings("unchecked")
public static <T> T realObjectFromField(Class<?> clazz, String fieldName, Object object) {
	Field declaredField = accessibleField(clazz, fieldName);
	try {
		return (T) declaredField.get(object);
	} catch (IllegalArgumentException | IllegalAccessException e) {
		throw new RuntimeException(e);
	}
}

private static Field accessibleField(Class<?> clazz, String fieldName) {
	try {
		Field declaredField = clazz.getDeclaredField(fieldName);
		declaredField.setAccessible(true);
		return declaredField;
	} catch (NoSuchFieldException | SecurityException e) {
		throw new RuntimeException(e);
	}
}

// This is how we use it in a test method
import static mypackage.ReflectionUtils.realObjectFromField;

ItemFiltersMapperByFlag mapper = realObjectFromField(ItemsFilterExecutor.class, "filtersMapper", filterExecutor);
assertNotNull("mapper is null. Check wiring", mapper);

The Single Responsibility Principle

Introduction
In this post I would like to cover the Single Responsibility Principle (SRP).
I think that this is the basis of any clean and well designed system.

What is SRP?
The term was introduced by Robert C. Martin.
It is the ‘S’ from the SOLID principles, which are the basis for OOD.
http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
Here’s the PDF paper for SRP by Robert C. Martin https://docs.google.com/file/d/0ByOwmqah_nuGNHEtcU5OekdDMkk/

From Wikipedia:

…In object-oriented programming, the single responsibility principle states that every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility….

From Clean Code:

A class or module should have one, and only one, reason to change.

So if a class (or module) needs to be modified for more than one reason, it does more than one thing, i.e., it has more than one responsibility.

Why SRP?

  • Organize the code
    Let’s imagine a car mechanic who owns a repair shop.
    He has many, many tools to work with. The tools are divided into types: pliers, screwdrivers (Phillips / blade), hammers, wrenches (tubing / hex) and many more.

    How would it be easier to organize the tools?
    Few drawers with different types in each one of them?
    Or, many small drawers, each containing a specific type?

    Now, imagine the drawer as the module. This is why many small modules (classes) are more organized than a few large ones.

  • Less fragile
    When a class has more than one reason to be changed, it is more fragile.
    A change in one location might lead to some unexpected behavior in totally unrelated places.
  • Low Coupling
    More responsibilities lead to higher coupling.
    The couplings are the responsibilities.
    Higher coupling leads to more dependencies, which are harder to maintain.
  • Code Changes
    Refactoring is much easier for a single responsibility module.
    If you want to get the shotgun effect, let your classes have more responsibilities.
  • Maintainability
    It's obvious that it is much easier to maintain a small, single purpose class than a big monolithic one.
  • Testability
    A test class for a 'one purpose class' will have fewer test cases (branches).
    If a class has one purpose it will usually have fewer dependencies, thus less mocking and test preparation.
    The “self documentation by tests” becomes much clearer.
  • Easier Debugging
    Since I started doing TDD and a test-first approach, I hardly ever debug. Really.
    But there come times when I must debug in order to understand what's going on.
    In a single responsibility class, finding the bug or the cause of the problem becomes a much easier task.

What needs to have single responsibility?
Each part of the system.

  • The methods
  • The classes
  • The packages
  • The modules

How to Recognize a Break of the SRP?

  • Class Has Too Many Dependencies
    A constructor with too many input parameters implies many dependencies (hopefully you do inject dependencies).

    Another way to see many dependencies is through the test class.
    If you need to mock too many objects, it usually means breaking the SRP.

  • Method Has Too Many Parameters
    Same as the class’s smell. Think of the method’s parameters as dependencies.
  • The Test Class Becomes Too Complicated
    If the test has too many variants, it might suggest that the class has too many responsibilities.
    It might suggest that some methods do too much.
  • Class / Method is Long
    If a method is long, it might suggest it does too much.
    Same goes for a class.
    My rule of thumb is that a class should not exceed 200-250 LOC. Imports included 😉
  • Descriptive Naming
    If you need the word AND to describe what your class / method / package does, it probably breaks the SRP.
  • Class With Low Cohesion
    Cohesion is an important topic of its own and should have its own post.
    But Cohesion and SRP are closely related and it is important to mention it here.
    In general, if a class (or module) is not cohesive, it probably breaks the SRP.

    A hint for a non-cohesive class:
    The class has two fields. One field is used by some methods. The other field is used by the other methods. (A small sketch of such a class follows after this list.)

  • Change In One Place Breaks Another
    If a change in the code, to add a new feature or simply to refactor, broke a test which seems unrelated, it might suggest a break of the SRP.
  • Shotgun Effect
    If a small change makes a big ripple in your code, and you need to change many locations, it might suggest, among other smells, that the SRP is broken.
  • Unable to Encapsulate a Module
    I will explain using Spring, but the concept is important (not the implementation).
    Suppose you use the @Configuration or XML configuration.
    If you can’t encapsulate the beans in that configuration, it should give you a hint of too much responsibility.
    The Configuration should hide any inner bean and expose minimal interfaces.
    If you need to change the Configuration due to more than one reason, then, well, you know…
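
To illustrate the low-cohesion hint from the list above, here is a small, purely hypothetical sketch; every name in it is made up. The two fields split the class into two unrelated halves, which usually means two responsibilities hiding in one class:

// Hypothetical example of a non-cohesive class (all names are made up)
public class ReportService {
	private final ReportRepository repository; // used only by load and save
	private final ReportFormatter formatter;   // used only by asHtml and asCsv

	public ReportService(ReportRepository repository, ReportFormatter formatter) {
		this.repository = repository;
		this.formatter = formatter;
	}

	public Report load(String id) {
		return repository.findById(id);
	}

	public void save(Report report) {
		repository.store(report);
	}

	public String asHtml(Report report) {
		return formatter.toHtml(report);
	}

	public String asCsv(Report report) {
		return formatter.toCsv(report);
	}
}

Splitting it into something like a persistence class and a formatting class would give each part a single reason to change.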

How to make the design compliant with the Single Responsibility Principle
The suggestions below can apply to other topics of the SOLID principles.
They are also good for any Clean Code suggestion.
But here they are aimed at the Single Responsibility Principle.

  • Awareness
    This is a general suggestion for clean code.
    We need to be aware of our code. We need to take care.
    As for SRP, we need to try to catch, as early as we can, a class that is responsible for too much.
    We need to always look out for 'too big' methods.
  • Testable Code
    Write your code in a way that everything can be tested.
    Then you will surely want your tests to be simple and descriptive.
  • TDD
    (I am not going to add anything here)
  • Code Coverage Metrics
    Sometimes, when a class does too much, it won’t have 100% coverage at first shot.
    Check the code quality metrics.
  • Refactoring and Design Patterns
    For SRP, we’ll mostly do extract-method, extract-class, move-method.
    We'll use composition and the strategy pattern instead of conditionals (see the sketch after this list).
  • Clear Modularization of the System
    When using a DI injector (Spring), I think that the Configuration class (or XML) can pinpoint the modules' design and each module's single responsibility.
    I prefer having several small to medium sized configuration files (XML or Java) over one big file / class.
    It helps to see the responsibility of each module and makes it easier to maintain.
    I think that the configuration approach to injection has an advantage over the annotation approach, simply because the Configuration approach puts the modules in the spotlight.
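
As a small illustration of composition and the strategy pattern instead of conditionals, mentioned in the refactoring item above, here is a hedged, hypothetical sketch (all names are made up):

// Instead of one class with a growing if / else-if block per format...
public interface ExportStrategy {
	boolean supports(String format);
	byte[] export(Report report);
}

// ...the exporter is composed of single-responsibility strategies (each in its own file)
public class Exporter {
	private final List<ExportStrategy> strategies;

	public Exporter(List<ExportStrategy> strategies) {
		this.strategies = strategies;
	}

	public byte[] export(Report report, String format) {
		// adding a new format means adding a new strategy class, not modifying this one
		for (ExportStrategy strategy : strategies) {
			if (strategy.supports(format)) {
				return strategy.export(report);
			}
		}
		throw new IllegalArgumentException("Unsupported format: " + format);
	}
}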

Conclusion
As I mentioned in the beginning of this post, I think that Single-Responsibility-Principle is the basis of a good design.
If you have this principle in mind while designing and developing, you will have simpler, more readable code.
Better design will follow.

One Final Note
As always, one needs to be careful about how to apply practices, code and design.
Sometimes we might overdo it and make simple things overly complex.
So common sense must be applied to any refactoring and change.

Request Validation and Filtering by Flags – Redesign and Refactoring

General
In the previous posts I started describing a validation / filtering framework we’re building.
While showing the code, I am trying to show clean code, test orientation and code evolution.
There is some agility in the process; we know the end requirements, but the exact details are evolving over time.

During the development we have changed the code to be more general as we saw some patterns in it.
The code evolved as the flow evolved as well.

The flow as we now understand it
Here’s a diagram of the flow we’ll implement

Request Sequence

The Pattern
At each step of the sequence (validation, filtering, action), we recognized the same pattern:

  1. We have specific implementations (filters, validations)
  2. We have an engine that wraps up the specific implementations
  3. We need to map the implementations by flag and, based on the request's flags, select the appropriate implementations.
  4. We need to have a class that calls the mapper and then the engine.

A diagram showing the pattern

The Pattern

Source Code
In order to show some of the evolution of the code, and how refactoring changed it, I added tags in GitHub after major changes.

Code Examples
Let's see what came out of the mapper pattern.

public interface MapperByFlag<T> {
  List<T> getOperations(Request request);
}
public abstract class AbstractMapperByFlag<T> implements MapperByFlag<T> {
  private List<T> defaultOperations;
  private Map<String, List<T>> mapOfOperations;

  public AbstractMapperByFlag(List<T> defaultOperations, Map<String, List<T>> mapOfOperations) {
    this.defaultOperations = defaultOperations;
    this.mapOfOperations = mapOfOperations;
  }

  @Override
  public final List<T> getOperations(Request request) {
    Set<T> selectedFilters = Sets.newHashSet(defaultOperations);
    Set<String> flags = request.getFlags();
    for (String flag : flags) {
      if (mapOfOperations.containsKey(flag)) {
        selectedFilters.addAll(mapOfOperations.get(flag));
      }
    }
    return Lists.newArrayList(selectedFilters);
  }
}
// The two concrete mappers just pass their specific types to the abstract class
// (class declarations added here for completeness)
public class RequestValidationByFlagMapper extends AbstractMapperByFlag<RequestValidation> {
  public RequestValidationByFlagMapper(List<RequestValidation> defaultValidations,
      Map<String, List<RequestValidation>> mapOfValidations) {
    super(defaultValidations, mapOfValidations);
  }
}

public class ItemFiltersByFlagMapper extends AbstractMapperByFlag<Filter> {
  public ItemFiltersByFlagMapper(List<Filter> defaultFilters, Map<String, List<Filter>> mapOfFilters) {
    super(defaultFilters, mapOfFilters);
  }
}

I created a test for the abstract class, to show the flow itself.
The tests of the implementations use Java reflection to verify that the correct injected parameters are passed to the super constructor.
I am showing the imports here as well, to give some reference for the static imports and the Mockito and Hamcrest packages and classes.

import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.when;

import java.util.List;
import java.util.Map;

import org.eyal.requestvalidation.model.Request;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;

@RunWith(MockitoJUnitRunner.class)
public class AbstractMapperByFlagTest {
	private final static String FLAG_1 = "flag 1";
	private final static String FLAG_2 = "flag 2";

	@Mock
	private Request request;

	private String defaultOperation1 = "defaultOperation1";
	private String defaultOperation2 = "defaultOperation2";
	private String mapOperation11 = "mapOperation11";
	private String mapOperation12 = "mapOperation12";
	private String mapOperation23 = "mapOperation23";

	private MapperByFlag<String> mapper;

	@Before
	public void setup() {
		List<String> defaults = Lists.newArrayList(defaultOperation1, defaultOperation2);
		Map<String, List<String>> mapped = ImmutableMap.<String, List<String>> builder()
		        .put(FLAG_1, Lists.newArrayList(mapOperation11, mapOperation12))
		        .put(FLAG_2, Lists.newArrayList(mapOperation23, mapOperation11)).build();
		mapper = new AbstractMapperByFlag<String>(defaults, mapped) {
		};
	}

	@Test
	public void whenRequestDoesNotHaveFlagsShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet());

		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}

	@Test
	public void whenRequestHasFlagsNotInMappingShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet("un-mapped-flag"));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasOneFlagShouldReturnWithDefaultAndMappedFilters() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasTwoFlagsShouldReturnWithDefaultAndMappedFiltersWithoutDuplications() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1, FLAG_2));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2, mapOperation23));
	}
}
@RunWith(MockitoJUnitRunner.class)
public class RequestValidationByFlagMapperTest {

	@Mock
	private List<RequestValidation> defaultValidations;
    
	@Mock
	private Map<String, List<RequestValidation>> mapOfValidations;

	@InjectMocks
	private RequestValidationByFlagMapper mapper;

	@SuppressWarnings("unchecked")
    @Test
	public void verifyParameters() throws NoSuchFieldException, SecurityException, IllegalArgumentException,
	        IllegalAccessException {
		Field defaultOperationsField = AbstractMapperByFlag.class.getDeclaredField("defaultOperations");
		defaultOperationsField.setAccessible(true);
        List<RequestValidation> actualFilters = (List<RequestValidation>) defaultOperationsField.get(mapper);
		assertThat(actualFilters, sameInstance(defaultValidations));

		Field mapOfFiltersField = AbstractMapperByFlag.class.getDeclaredField("mapOfOperations");
		mapOfFiltersField.setAccessible(true);
		Map<String, List<RequestValidation>> actualMapOfFilters = (Map<String, List<RequestValidation>>) mapOfFiltersField.get(mapper);
		assertThat(actualMapOfFilters, sameInstance(mapOfValidations));
	}
}

To Do
There are other classes that might be candidates for refactoring of some sort.
RequestFlowValidation and RequestFilter are similar, and so are RequestValidationsEngineImpl and FiltersEngine.

To Do 2
Create a Matcher for the reflection part.

Code
As always, all the code can be found in the GitHub repository: https://github.com/eyalgo/request-validation

The tag for this post: all-components-in

Conclusion
The infrastructure is almost done.
During this time we are also implementing actual classes for the flow (validations, filters, actions).
These are not covered in the posts, nor in GitHub.
The infrastructure will be wired to a service we have using Spring.
This will be explained in future posts.

Request Validation and Filtering by Flags – Filtering an Item

In a previous post, I introduced a system requirement of validating and filtering a request by setting flags on it.

Reference: Introduction

In this post I want to show the filtering system.

Here are general UML diagrams of the filtering components and sequence.

Filtering UML Diagram

General Components

public interface Item {
	String getName();
}

public interface Request {
	Set<String> getFlags();
	List<Item> getItems();
}

Filter Mechanism (as described in the UML above)

public interface Filter extends Predicate<Item> {
	String errorMessage();
}

FiltersEngine is a cool part, which takes several Filters and applies each of them to the items. Below you can see its code. Above, the sequence diagram shows how it's done.

public class FiltersEngine {

	public FiltersEngine() {
	}

	public ItemsFilterResponse applyFilters(List<Filter> filters, List<Item> items) {
		List<Item> validItems = Lists.newLinkedList(items);
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Filter validator : filters) {
			ItemsFilterResponse responseFromFilter = responseFromFilter(validItems, validator);
			validItems = responseFromFilter.getValidItems();
			invalidItemInformations.addAll(responseFromFilter.getInvalidItemsInformations());
		}

		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}

	private ItemsFilterResponse responseFromFilter(List<Item> items, Filter filter) {
		List<Item> validItems = Lists.newLinkedList();
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Item item : items) {
			if (filter.apply(item)) {
				validItems.add(item);
			} else {
				invalidItemInformations.add(new InvalidItemInformation(item, filter.errorMessage()));
			}
		}
		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}
}
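
ItemsFilterResponse and InvalidItemInformation are simple value holders (the real classes are in the GitHub repository); here is a hedged sketch of their shape, matching the way they are used above and in the test below:

// Hedged sketch of the two value holders (each in its own file)
public class ItemsFilterResponse {
	private final List<Item> validItems;
	private final List<InvalidItemInformation> invalidItemsInformations;

	public ItemsFilterResponse(List<Item> validItems, List<InvalidItemInformation> invalidItemsInformations) {
		this.validItems = validItems;
		this.invalidItemsInformations = invalidItemsInformations;
	}

	public List<Item> getValidItems() {
		return validItems;
	}

	public List<InvalidItemInformation> getInvalidItemsInformations() {
		return invalidItemsInformations;
	}
}

public class InvalidItemInformation {
	private final String name;
	private final String errorMessage;

	public InvalidItemInformation(Item item, String errorMessage) {
		this.name = item.getName();
		this.errorMessage = errorMessage;
	}

	public String getName() {
		return name;
	}

	public String getErrorMessage() {
		return errorMessage;
	}
}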

And of course, we need to test it:

@RunWith(MockitoJUnitRunner.class)
public class FiltersEngineTest {
	private final static String MESSAGE_FOR_FILTER_1 = "FILTER - 1 - ERROR";
	private final static String MESSAGE_FOR_Filter_2 = "FILTER - 2 - ERROR";
	@Mock(name = "filter 1")
	private Filter singleFilter1;
	@Mock(name = "filter 2")
	private Filter singleFilter2;
	@Mock(name = "item 1")
	private Item item1;
	@Mock(name = "item 2")
	private Item item2;

	@InjectMocks
	private FiltersEngine filtersEngine;

	@Before
	public void setup() {
		when(singleFilter1.errorMessage()).thenReturn(MESSAGE_FOR_FILTER_1);
		when(singleFilter2.errorMessage()).thenReturn(MESSAGE_FOR_Filter_2);

		when(item1.getName()).thenReturn("name1");

		when(item2.getName()).thenReturn("name2");
	}

	@Test
	public void verifyThatAllSingleFiltersAreCalledForValidItems() {
		when(singleFilter1.apply(item1)).thenReturn(true);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item1)).thenReturn(true);
		when(singleFilter2.apply(item2)).thenReturn(true);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat("expected no invalid", response.getInvalidItemsInformations(),
				emptyCollectionOf(InvalidItemInformation.class));
		assertThat(response.getValidItems(), containsInAnyOrder(item1, item2));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter2).apply(item1);
		verify(singleFilter2).apply(item2);
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	@SuppressWarnings("unchecked")
	@Test
	public void itemsFailIndifferentFiltersShouldGetOnlyFailures() {
		when(singleFilter1.apply(item1)).thenReturn(false);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item2)).thenReturn(false);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat(
				response.getInvalidItemsInformations(),
				containsInAnyOrder(matchInvalidInformation(new InvalidItemInformation(item1, MESSAGE_FOR_FILTER_1)),
						matchInvalidInformation(new InvalidItemInformation(item2, MESSAGE_FOR_Filter_2))));
		assertThat(response.getValidItems(), emptyCollectionOf(Item.class));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter1).errorMessage();
		verify(singleFilter2).apply(item2);
		verify(singleFilter2).errorMessage();
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	@Test
	public void firstItemFailSecondItemSuccessShouldGetOneItemInEachList() {
		when(singleFilter1.apply(item1)).thenReturn(true);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item1)).thenReturn(false);
		when(singleFilter2.apply(item2)).thenReturn(true);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat(response.getInvalidItemsInformations(), contains(matchInvalidInformation(new InvalidItemInformation(item1,
				MESSAGE_FOR_Filter_2))));
		assertThat(response.getValidItems(), containsInAnyOrder(item2));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter2).apply(item1);
		verify(singleFilter2).apply(item2);
		verify(singleFilter2).errorMessage();
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	private static BaseMatcher<InvalidItemInformation> matchInvalidInformation(InvalidItemInformation expected) {
		return new InvalidItemInformationMatcher(expected);
	}

	private final static class InvalidItemInformationMatcher extends BaseMatcher<InvalidItemInformation> {
		private InvalidItemInformation expected;

		private InvalidItemInformationMatcher(InvalidItemInformation expected) {
			this.expected = expected;
		}

		public boolean matches(Object itemInformation) {
			InvalidItemInformation actual = (InvalidItemInformation) itemInformation;
			return actual.getName().equals(expected.getName())
					&& actual.getErrorMessage().equals(expected.getErrorMessage());
		}

		public void describeTo(Description description) {
		}
	}
}

Some explanation about the test
You can see that I don’t care about the implementation of Filter. Actually, I don’t even have any implementation of it.
I also don't have implementations of Item or Request.
You can see an example of how to create a BaseMatcher to be used with assertThat(…).

Coding
Try to see whether it is ‘clean’. Can you understand the story of the code? Can you tell what the code does by reading it line by line?

In the following post I will show how I applied the flag mapping to select the correct filters for a request.

You can find all the code in: https://github.com/eyalgo/request-validation

[Edit] Created tag Filtering_an_item before refactoring.

Request Validation and Filtering by Flags – Introduction

General

We are working on a service that should accept some kind of request.

The request has a list of Items. In the response we need to tell the client whether the request is valid, and also give some information about each item: whether it is valid or not. If an item is valid, it will be persisted. If it's not, it should be filtered out. So the response can hold information about how many items are valid (and sent to be persisted) and a list of information about the filtered-out items.

The request has other metadata in it: a collection (set) of flags. The filtering and validation are based on the flags of the request. So basically one request may be validated and filtered differently from another, based on the flags of each request.

We might have general validations / filters that need to be applied to any request, whatever flags it has.

Request Validation and Filtering High level design

Design

Flags Mapping

We’ll hold a mapping of flag-to-filters, and flag-to-validation.

Request

Has flags and items.

Components

Filter, Filter-Engine, Flags-Mapper

Development Approach

Bottom Up

We have a basic request already, as the service is up and running, but we don’t have any infrastructure for flags, flag-mapping, validation and filtering.

We’ll work bottom up; create the mechanism for filtering, enhance the request and then wire it up using Spring.

Coding

I’ll try to show the code using tests, and the development using some kind of TDD approach.

I am using eclipse’s EclEmma for coverage.

General

By looking at the code, you can see usage of JUnit, Mockito, Hamcrest, Google-Guava.

You can also see small classes and an interface-driven development approach.

Source Code

https://github.com/eyalgo/request-validation

Recommended Books

I have a list of books, which I highly recommend.
Each book taught me something different.

It all began years ago, when I went into the interviewing process for my second workplace.
I was a junior Java developer, a coder. I didn't have much experience and, more importantly, I did not have a mentor or someone who would direct me. I learned on my own, after a CS Java course. Java 1.4 had just come out.

One of my first interviewers was a great mentor. We met for an hour (probably). I don’t remember the company.  I don’t remember the job position. I don’t remember his name.
But I DO remember a few things he asked me.
He asked me if I knew what TDD was. He asked me about XP.
He also recommended a book: Effective Java by Joshua Bloch

He didn’t even know what a great gift he gave me.

So I went on and bought Effective Java, 1st edition, and TDD by Kent Beck.
That was my first step towards being a craftsman.

Effective Java and Refactoring
These two books look as if they are not entirely related.
However, both of them taught me a lot about design and patterns.
I started to understand how to write code using patterns (Refactoring), and how to do it in Java (Effective).
These books gave me the grounding for best practices in Java, Design Patterns and OOD.

Test Driven Development
I can’t say enough about this book.
At first, I really didn’t understand what it was all about.
But it was part of XP!! (which I didn't understand either).
The TDD book was left on the shelf until I was ready for it.

Clean Code and The Pragmatic Programmer
Should I say more?
If you haven't read both, stop everything and go read them.
They are a MUST for anyone who wants to be a craftsman and takes his / her profession seriously.
These books are also lots of fun to read, especially the Pragmatic one.

The Clean Coder
If you want to take the next step of being a professional, read it.
I was sometimes frustrated while reading it. I thought to myself: how can I pass all of this material to my teammates…

Dependency Injection
Somewhat unrelated, but as I see it, if you don't use DI, you can't write clean, testable code.
If you can't write clean, testable code, you are missing the point of craftsmanship.
The book covers some injector frameworks, but also describes what it is all about.

Below is a table with the books I have mentioned.

One last remark:
this list does not contain the only books I have read.
Over the years I have read more technical / professional books, but these made the most difference for me.

Name | Author(s) | ISBN
Effective Java | Joshua Bloch | 978-032-135-668-0
Test-Driven Development | Kent Beck | 978-032-114-653-3
Refactoring | Martin Fowler | 978-020-148-567-7
Dependency Injection | Dhanji R. Prasanna | 978-193-398-855-9
Clean Code | Robert C. Martin | 978-013-235-088-4
The Clean Coder | Robert C. Martin | 978-013-708-107-3
The Pragmatic Programmer | Andrew Hunt, David Thomas | 978-020-161-622-4