SRP as part of SOLID

Clean Code Alliance organized a meetup about the SOLID principles.
I had the opportunity to talk about the Single Responsibility Principle as part of SOLID.
It’s a presentation I have given several times in the past.

It was fun talking about it.
There were many interesting and challenging questions, which gave me lots of things to think about.

Title:
SRP as part of SOLID

Abstract:
Single Responsibility Principle (SRP) is part of the SOLID acronym. The SOLID principles help us design better code. Applying those principles helps us have maintainable code, fewer bugs and easier testing. The SRP is the foundation of better designed code. In this session I will introduce the SOLID principles and explain in more detail what SRP is all about. Applying those principles is not sci-fi; it is real, and I will demonstrate it.

Bio:
Eyal Golan is a Senior Java developer and agile practitioner. He is responsible for building high throughput, low latency server infrastructure, manages the continuous integration and deployment of the system, and leads the coding practices. He practices TDD and clean code, and is on the path to software craftsmanship.

Following me, Hayim Makabee gave a really interesting talk about The SOLID Principles Illustrated by Design Patterns.

Here are the slides.

And the video (in Hebrew)

Thanks to the organizers, Boris and Itzik, and mostly to the audience, who seemed very interested.



Dropwizard, MongoDB and Gradle Experimenting

Introduction

I created a small project using Dropwizard, MongoDB and Gradle.
It actually started as an experiment with a Guava cache as a buffer for sending counters to MongoDB (or any other DB).
I also wanted to try Gradle with the MongoDB plugin.
Next, I wanted to create some kind of interface to check this framework, and I decided to try out Dropwizard.
And this is how this project was created.

This post is not a tutorial of using any of the chosen technologies.
It is a small showcase that I did as an experiment.
I guess there are some flaws and maybe I am not using all “best practices”.
However, I do believe that the project, with the help of this post, can be a good starting point for the different technologies I used.
I also tried to show some design choices that help achieve SRP, decoupling, cohesion, etc.

I decided to begin the post with the use-case description and how I implemented it.
After that, I will explain what I did with Gradle, MongoDB (and embedded) and Dropwizard.

Before I begin, here’s the source code:
https://github.com/eyalgo/CountersBuffering

The Use-Case: Counters With Buffer

We have some input requests into our servers.
During the process of a request, we choose to “paint” it with some data (decided by some logic).
Some requests will be painted by Value-1, some by Value-2, etc. Some will not be painted at all.
We want to limit the number of painted requests (per paint value).
In order to enforce the limit, we know the maximum per paint-value, but we also need to count, per paint-value, the number of painted requests.
As the system has several servers, the counters should be shared by all servers.

The latency is crucial. Normally we get 4-5 milliseconds per request processing (for all the flow. Not just the painting).
So we don’t want increasing the counters to increase the latency.
Instead, we’ll keep a buffer; the client will send ‘increase’ to the buffer.
The buffer will periodically update the repository with a ‘bulk increment’.

I know it is possible to use Hazelcast, Couchbase or some other similar fast in-memory DB directly.
But for our use-case, this was the best solution.

The principle is simple:

  • The dependent module will call a service to increase a counter for some key
  • The implementation keeps a buffer of counters per key
  • It is thread safe
  • The writing happens in a separate thread
  • Each write will do a bulk increase

(Diagram: Counters High Level Design)

Buffer

For the buffer, I used Google Guava cache.

Buffer Structure

private final LoadingCache<Counterable, BufferValue> cache;
...

this.cache = CacheBuilder.newBuilder()
	.maximumSize(bufferConfiguration.getMaximumSize())
	.expireAfterWrite(bufferConfiguration.getExpireAfterWriteInSec(), TimeUnit.SECONDS)
	.expireAfterAccess(bufferConfiguration.getExpireAfterAccessInSec(), TimeUnit.SECONDS)
	.removalListener((notification) -> increaseCounter(notification))
	.build(new BufferValueCacheLoader());
...

(Counterable is described below. Note the removal listener: when an entry expires or is evicted from the cache, its accumulated count is flushed to the repository.)

BufferValueCacheLoader extends Guava’s CacheLoader.
When we call increase (see below), we first get the value from the cache by key.
If the key does not exist, the loader returns a new BufferValue.

public class BufferValueCacheLoader extends CacheLoader<Counterable, BufferValue> {
	@Override
	public BufferValue load(Counterable key) {
		return new BufferValue();
	}
}

BufferValue wraps an AtomicInteger (I would need to change it to a Long at some point).
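
The post doesn’t show BufferValue itself; here is a minimal sketch, assuming it simply delegates to the wrapped AtomicInteger (the real class may differ):

import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of BufferValue, assuming simple delegation to AtomicInteger
public class BufferValue {
	private final AtomicInteger value = new AtomicInteger(0);

	public int increment() {
		return value.incrementAndGet();
	}

	// used by increase() to atomically subtract the flushed amount
	public boolean compareAndSet(int expected, int newValue) {
		return value.compareAndSet(expected, newValue);
	}

	public int getValue() {
		return value.get();
	}
}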

Increase the Counter

public void increase(Counterable key) {
	BufferValue meter = cache.getUnchecked(key);
	int currentValue = meter.increment();
	if (currentValue > threshold) {
		if (meter.compareAndSet(currentValue, currentValue - threshold)) {
			increaseCounter(key, threshold);
		}
	}
}

When increasing a counter, we first get the current value from the cache (with the help of the loader, as described above).
compareAndSet atomically checks that the value is the same (i.e., not modified by another thread).
If so, it updates the value and returns true.
On success (returned true), the buffer calls the updater.

View the buffer

After developing the service, I wanted a way to view the buffer.
So I implemented the following method, which is used by the front-end layer (Dropwizard’s resource).
It’s a small example of a Java 8 Stream and lambda expression.

return ImmutableMap.copyOf(cache.asMap())
	.entrySet().stream()
	.collect(Collectors.toMap(
		(entry) -> entry.getKey().toString(),
		(entry) -> entry.getValue().getValue()));

MongoDB

I chose MongoDB because of two reasons:

  1. We have a similar implementation in our system, for which we decided to use MongoDB as well.
  2. It is easy to use with an embedded server.

I tried to design the system so that it’s possible to choose any other persistence implementation and swap it in.

I used Morphia as the MongoDB client layer instead of using the Java client directly.
With Morphia you create a DAO, which is the connection to a MongoDB collection.
You also declare a simple Java Bean (POJO) that represents a document in a collection.
Once you have the DAO, you can do operations on the collection the “Java way”, with a fairly easy API.
You can have queries and any other CRUD operations, and more.
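
The post doesn’t show the document class; here is a sketch of what it might look like, assuming Morphia’s @Entity / @Id annotations, with the collection name and field names guessed from the update query shown later (‘id’ and ‘count’):

import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;

// A sketch of the counter document; collection name and fields are assumptions
@Entity("counters")
public class Counter {
	@Id
	private String id;   // the counter key
	private long count;  // the accumulated value

	public String getId() { return id; }
	public long getCount() { return count; }
}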

I had two operations: increasing a counter and getting all counters.
The service implementations do not extend Morphia’s BasicDAO; instead, each has a class that inherits from it.
I used composition (over inheritance) because I wanted to have more behavior in both services.

In order to be consistent with the key representation, and to hide the way it is implemented from the dependent code, I used an interface, Counterable, with a single method: counterKey().

public interface Counterable {
	String counterKey();
}

final class MongoCountersDao extends BasicDAO<Counter, ObjectId> {
	MongoCountersDao(Datastore ds) {
		super(Counter.class, ds);
	}
}
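
For illustration, a dependent module could implement Counterable like this (a hypothetical key class, not taken from the project):

// Hypothetical Counterable implementation for the painting use-case
public class PaintCounterKey implements Counterable {
	private final String paintValue;

	public PaintCounterKey(String paintValue) {
		this.paintValue = paintValue;
	}

	@Override
	public String counterKey() {
		return "paint_" + paintValue;
	}
}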

Increasing the Counter

@Override
protected void increaseCounter(String key, int value) {
	Query<Counter> query = dao.createQuery();
	query.criteria("id").equal(key);
	UpdateOperations<Counter> ops = dao.getDs().createUpdateOperations(Counter.class).inc("count", value);
	dao.getDs().update(query, ops, true);
}

Embedded MongoDB

In order to run tests on the persistence layer, I wanted to use an in-memory database.
There’s a MongoDB plugin for that.
With this plugin you can run a server by just creating it at runtime, or run it as a goal in Maven / a task in Gradle.
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo
https://github.com/sourcemuse/GradleMongoPlugin

Embedded MongoDB on Gradle

I will elaborate more on Gradle later, but here’s what I needed to do in order to set up the embedded Mongo.

dependencies {
	// More dependencies here
	testCompile 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
}

Setup Properties

mongo {
	//	logFilePath: The desired log file path (defaults to 'embedded-mongo.log')
	logging 'console'
	mongoVersion 'PRODUCTION'
	port 12345
	//	storageLocation: The directory location from where embedded Mongo will run, such as /tmp/storage (defaults to a java temp directory)
}

Embedded MongoDB Gradle Tasks

startMongoDb will just start the server. It will run until it is stopped.
stopMongoDb will stop it.
startManagedMongoDb test: two tasks which will start the embedded server before the tests run. The server will shut down when the JVM finishes (when the tests finish).

Gradle

https://gradle.org/
Although I have only touched the tip of the iceberg, I have started seeing the strength of Gradle.
It wasn’t even that hard to set up the project.

Gradle Setup

First, I created a Gradle project in Eclipse (after installing the plugin).
I needed to set up the dependencies. Very simple. Just like Maven.

One Big JAR Output

When I want to create one big jar from all libraries in Maven, I use the shade plugin.
I was looking for something similar and found the gradle-one-jar plugin.
https://github.com/rholder/gradle-one-jar
I added the plugin:
apply plugin: 'gradle-one-jar'
and added one-jar to the classpath:

buildscript {
	repositories { mavenCentral() }
	dependencies {
		classpath 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
		classpath 'com.github.rholder:gradle-one-jar:1.0.4'
	}
}

And added a task:

mainClassName = 'org.eyalgo.server.dropwizard.CountersBufferApplication'
task oneJar(type: OneJar) {
	mainClass = mainClassName
	archiveName = 'counters.jar'
	mergeManifestFromJar = true
}

Those were all the actions needed to make the application run.

Dropwizard

Dropwizard is a stack of libraries that makes it easy to create web servers quickly.
It uses Jetty for HTTP and Jersey for REST. It has other mature libraries for creating complicated services.
It can be used to develop a microservice easily.

As I explained in the introduction, I will not cover all of Dropwizard features and/or setup.
There are plenty of sites for that.
I will briefly cover the actions I did in order to make the application run.

Gradle Run Task

run { args 'server', './src/main/resources/config/counters.yml' }
The first argument is server. The second argument is the location of the configuration file.
If you don’t give Dropwizard the first argument, you will get a nice error message listing the possible options.

positional arguments:
  {server,check}         available commands

I already showed how to create one jar in the Gradle section.

Configuration

In Dropwizard, you setup the application using a class that extends Configuration.
The fields in the class should align with the properties in the yml configuration file.

It is a good practice to put the properties in groups, based on their usage/responsibility.
For example, I created a group for mongo parameters.

In order for the configuration class to read the sub-groups correctly, you need to create a class that aligns with the properties in the group.
Then, in the main configuration, add this class as a member and mark it with the annotation @JsonProperty.
Example:

@JsonProperty("mongo")
private MongoServicesFactory servicesFactory = new MongoServicesFactory();
@JsonProperty("buffer")
private BufferConfiguration bufferConfiguration = new BufferConfiguration();
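
For completeness, here’s a minimal sketch of the application entry point. CountersBufferApplication is the class referenced by mainClassName in the Gradle section; the configuration class name is my assumption:

import io.dropwizard.Application;
import io.dropwizard.setup.Environment;

// A minimal sketch of the Dropwizard entry point; the configuration class name is assumed
public class CountersBufferApplication extends Application<CountersBufferConfiguration> {
	public static void main(String[] args) throws Exception {
		new CountersBufferApplication().run(args);
	}

	@Override
	public void run(CountersBufferConfiguration configuration, Environment environment) {
		// register resources, health checks, etc. here
	}
}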

Example: Changing the Ports

Here’s part of the configuration file that sets the ports for the application.

server:
  adminMinThreads: 1
  adminMaxThreads: 64
  applicationConnectors:
    - type: http
      port: 9090
  adminConnectors:
    - type: http
      port: 9091

Health Check

Dropwizard gives a basic admin API out of the box. I changed the port to 9091.
I created a health check for the MongoDB connection.
You need to extend HealthCheck and implement the check method.

// Class name assumed for illustration; it extends Dropwizard's HealthCheck
public class MongoHealthCheck extends HealthCheck {
	private final MongoClient mongo;
	...
	@Override
	protected Result check() throws Exception {
		try {
			mongo.getDatabaseNames();
			return Result.healthy();
		} catch (Exception e) {
			return Result.unhealthy("Cannot connect to " + mongo.getAllAddress());
		}
	}
}
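
The health check then has to be registered with the environment, typically in the application’s run method (assuming the health check has a constructor that takes the MongoClient):

// Registering the health check under an arbitrary name
environment.healthChecks().register("mongo", new MongoHealthCheck(mongoClient));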

Other features are pretty much self-explanatory or as simple as in any getting-started tutorial.

Ideas for Enhancement

There are some things I may try to add.

  • Add tests to the Dropwizard section.
    This project started as a PoC, so, unlike my usual practice, I skipped the tests in the server part.
    Dropwizard has Testing Dropwizard, which I want to try.
  • A different persistence implementation (Couchbase? Hazelcast?).
  • Injection using Google Guice, and with its help, injecting a different persistence implementation.

That’s all.
Hope that helps.

Source code: https://github.com/eyalgo/CountersBuffering


Why Abstraction is Really Important

Abstraction
Abstraction is one of the key elements of good software design.
It helps encapsulate behavior. It helps decouple software elements. It helps create more self-contained modules. And much more.

Abstraction makes the application much easier to extend. It makes refactoring much easier.
When developing with a higher level of abstraction, you communicate the behavior and less of the implementation.

General
In this post, I want to introduce a simple scenario that shows how, by choosing a simple solution, we can get into a situation of tight coupling and rigid design.

Then I will briefly describe how we can avoid situations like this.

Case study description
Let’s assume that we have a domain object called RawItem.

public class RawItem {
    private final String originator;
    private final String department;
    private final String division;
    private final Object[] moreParameters;
    
    public RawItem(String originator, String department, String division, Object... moreParameters) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.moreParameters = moreParameters;
    }
}

The first three parameters represent the item’s key.
I.e. an item comes from an originator, a department and a division.
The “moreParameters” field is just to emphasize that the item has more parameters.

This triplet has two basic usages:
1. As key to store in the DB
2. As key in maps (key to RawItem)

Storing in DB based on the key
The DB tables are sharded in order to evenly distribute the items.
Sharding is done by a hash key modulo function.
This function works on a string.

Suppose we have N shards tables: (RAW_ITEM_REPOSITORY_00, RAW_ITEM_REPOSITORY_01,..,RAW_ITEM_REPOSITORY_NN),
then we’ll distribute the items based on some function and modulo:

String rawKey = originator + "_"  + department + "_" + division;
// func is a String -> int function, N = number of shards
// The representation of the key is described below
int shard = func(rawKey) % N;
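
The actual func isn’t shown in the post; for illustration only, a simple (assumed) choice could be:

// An illustrative String -> int function; floorMod avoids negative results
static int shardOf(String rawKey, int numberOfShards) {
	return Math.floorMod(rawKey.hashCode(), numberOfShards);
}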

Using the key in maps
The second usage for the triplet is mapping the items for fast lookup.
So, when NOT using abstraction, the maps will usually look like:

Map<String, RawItem> mapOfItems = new HashMap<>();
// Fill the map...

“Improving” the class
We see that we have a common usage of the key as a string, so we decide to put the string representation in the RawItem.

// new member
private final String key;

// in the constructor:
this.key = this.originator + "_" + this.department + "_"  + this.division;

// and a getter
public String getKey() {
  return key;
}

Assessment of the design
There are two flaws here:
1. Coupling between the sharding distribution and the items’ mapping.
2. The mapping key is rigid. Any change forces a change in the key, which might introduce hard-to-find bugs.

And then comes a new requirement
Up until now, the triplet: originator, department and division made up a key of an item.
But now, a new requirement comes in.
A division can have a subdivision.
It means that, unlike before, we can have two different items with the same triplet. The items will differ by the subdivision attribute.

Difficult to change
Regarding the DB distribution, we’ll need to keep the concatenated key of the triplet.
We must keep the modulo function the same, so distribution will still use the triplet, but the schema will change and have a ‘subdivision’ column as well.
We’ll change the queries to use the subdivision together with the original key.

In regard to the mapping, we’ll need to do a massive refactoring and pass an ItemKey (see below) instead of just a String.

Abstraction of the key
Let’s create ItemKey

public class ItemKey {
    private final String originator;
    private final String department;
    private final String division;
    private final String subdivision;

    public ItemKey(String originator, String department, String division, String subdivision) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.subdivision = subdivision;
    }

    public String asDistribution() {
        return this.originator + "_" + this.department + "_"  + this.division;
    }
}
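
One note: for ItemKey to work correctly as a map key, it must also implement equals and hashCode (omitted above for brevity). A minimal sketch to add inside the class, based on its four fields (using java.util.Objects):

// Required for ItemKey to behave correctly as a HashMap key
@Override
public boolean equals(Object o) {
	if (this == o) return true;
	if (!(o instanceof ItemKey)) return false;
	ItemKey other = (ItemKey) o;
	return Objects.equals(originator, other.originator)
			&& Objects.equals(department, other.department)
			&& Objects.equals(division, other.division)
			&& Objects.equals(subdivision, other.subdivision);
}

@Override
public int hashCode() {
	return Objects.hash(originator, department, division, subdivision);
}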

And,

Map<ItemKey, RawItem> mapOfItems = new HashMap<>();
// Fill the map...

// new constructor for RawItem
public RawItem(ItemKey itemKey, Object... moreParameters) {
    // fill the fields
}

Lesson Learned and conclusion
I wanted to show how a simple decision can really hurt.

And how, by a small change, we made the key abstract.
In the future the key can have even more fields, but we’ll only need to change its inner implementation.
The logic and the mapping usage will not need to change.

Regarding the change process:
I haven’t described how to do the refactoring, as it really depends on what the code looks like and how well it is tested.
In our case, some parts were easy, while others were really hard. The hard parts were around code that looked deep into the implementation of the key (string) and the item.

This situation was real
We actually had this flow in our design.
Everything was fine for two years, until we had to change the key (add the subdivision).
Luckily all of our code is tested, so we could see what broke and fix it.
But it was painful.

There are two abstractions that we could have implemented initially:
1. The more obvious one is using a key class (as described above), even if it only has one String field.
2. Any map usage needs to be examined to see whether we’d benefit from hiding it behind an abstraction.

The second abstraction is harder to grasp, fully understand and implement.

So,
do abstraction: tell a story, use the interfaces, and don’t get into the details while telling it.


Law of Demeter

Reduce coupling and improve encapsulation…

General
In this post I want to go over Law of Demeter (LoD).
I find this topic extremely important for keeping the code clean, well-designed and maintainable.

In my experience, seeing it broken is a huge smell for bad design.
Following the law, or refactoring based on it, leads to much improved, readable and more maintainable code.

So what is Law of Demeter?
I will start by mentioning the 4 basic rules:

The Law of Demeter says that a method M of object O may access / invoke methods of:

  1. O itself
  2. M’s input arguments
  3. Any object created in M
  4. O’s direct components / dependencies

These are fairly simple rules.

Let’s put this in other words:
Each unit (method) should have limited knowledge about other units.

Metaphors
The most common one is: Don’t talk to strangers

How about this:
Suppose I buy something at a 7-11.
When I need to pay, will I give my wallet to the clerk so she can open it and take the money out?
Or will I hand her the money directly?

How about this metaphor:
When you take your dog out for a walk, do you tell it to walk or its legs?

Why do we want to follow this rule?

  • We can change a class without having a ripple effect of changing many others.
  • We can change called methods without changing anything else.
  • Using LoD makes our tests much easier to construct. We don’t need to write so many ‘when‘ clauses for mocks that return and return and return.
  • It improves the encapsulation and abstraction (I’ll show in the example below).
    But basically, we hide “how things work”.
  • It makes our code less coupled. A caller method is coupled only to one object, and not to all of its inner dependencies.
  • It usually models the real world better.
    Take the wallet and payment as an example.

Counting Dots?
Although many dots usually imply a LoD violation, sometimes it doesn’t make sense to “merge the dots”.
Does:
getEmployee().getChildren().getBirthdays()
suggest that we do something like:
getEmployeeChildrenBirthdays() ?
I am not entirely sure.

Too Many Wrapper Classes
This is another possible outcome of trying to avoid LoD violations.
In this particular situation, I strongly believe that it’s another design smell which should be taken care of.

As always, we must use common sense while coding, cleaning and / or refactoring.

Example
Suppose we have a class: Item.
The item can hold multiple attributes.
Each attribute has a name and values (it’s a multi-value attribute).

The simplest implementation would be using a Map.

public class Item {
	private final Map<String, Set<String>> attributes;

	public Item(Map<String, Set<String>> attributes) {
		this.attributes = attributes;
	}

	public Map<String, Set<String>> getAttributes() {
		return attributes;
	}
}

Let’s have a class ItemSaver that uses the Item and its attributes
(please ignore the unstructured methods; this is an example for LoD, not SRP 🙂):

public class ItemSaver {
	private String valueToSave;

	public ItemSaver(String valueToSave) {
		this.valueToSave = valueToSave;
	}

	public void doSomething(String attributeName, Item item) {
		Set<String> attributeValues = item.getAttributes().get(attributeName);
		for (String value : attributeValues) {
			if (value.equals(valueToSave)) {
				doSomethingElse();
			}
		}
	}

	private void doSomethingElse() {
	}
}

Suppose I know (from the context of the application) that it’s a single value,
and I want to take it. Then the code would look like:
Set<String> attributeValues = item.getAttributes().get(attributeName);
String singleValue = attributeValues.iterator().next();
// String singleValue = item.getAttributes().get(attributeName).iterator().next();

I think it is clear that we have a problem.
Wherever we use the attributes of the Item, we know how it works. We know its inner implementation.
It also makes our tests much harder to maintain.

Let’s see an example of a test using mocks (Mockito).
You can imagine how much effort it would take to change and maintain it.

Item item = mock(Item.class);
Map<String, Set<String>> attributes = mock(Map.class);
Set<String> values = mock(Set.class);
Iterator<String> iterator = mock(Iterator.class);
when(iterator.next()).thenReturn("the single value");
when(values.iterator()).thenReturn(iterator);
when(attributes.containsKey("the-key")).thenReturn(true);
when(attributes.get("the-key")).thenReturn(values);
when(item.getAttributes()).thenReturn(attributes);

We could use a real Item instead of mocking, but we’d still need to create lots of pre-test data.

Let’s recap:

  • We exposed the inner implementation of how Item holds attributes.
  • In order to use the attributes, we needed to ask the item and then ask the inner objects (the values).
  • If we ever want to change the attributes implementation, we will need to change the classes that use Item and the attributes. Probably a lot of classes.
  • Constructing the test is tedious, cumbersome and error-prone, and requires lots of maintenance.

Improvement
The first improvement would be to let Item delegate to the attributes.

public class Item {
	private final Map<String, Set<String>> attributes;

	public Item(Map<String, Set<String>> attributes) {
		this.attributes = attributes;
	}

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.get(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return values(attributeName).iterator().next();
	}
}

And the test becomes much simpler:
Item item = mock(Item.class);
when(item.getSingleValue("the-key")).thenReturn("the single value");

We are (almost) totally hiding the implementation of the attributes from other classes.
The client classes are not aware of the implementation, except in two cases:

  1. Item still knows how attributes are built.
  2. The class that creates Item (whichever it is) also knows the implementation of attributes.

The two points above mean that if we change the implementation of Attributes (to something other than a map), at least two other classes will need to change. This is a great example of high coupling.

The Next Step Improvement
The solution above will sometimes (usually?) be enough.
As pragmatic programmers, we need to know when to stop.
However, let’s see how we can improve on the first solution.

Create a class Attributes:

public class Attributes {
	private final Map<String, Set<String>> attributes;

	public Attributes() {
		this.attributes = new HashMap<>();
	}

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.get(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return values(attributeName).iterator().next();
	}

	public Attributes addAttribute(String attributeName, Collection<String> values) {
		this.attributes.put(attributeName, new HashSet<>(values));
		return this;
	}
}

And the Item that uses it:
public class Item {
	private final Attributes attributes;

	public Item(Attributes attributes) {
		this.attributes = attributes;
	}

	public boolean attributeExists(String attributeName) {
		return attributes.attributeExists(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.values(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return attributes.getSingleValue(attributeName);
	}
}

(Did you notice? The implementation of the attributes inside Item changed, but the test did not need to. This is thanks to the small change of delegation.)

In the second solution we improved the encapsulation of Attributes.
Now even Item does not know how it works.
We can change the implementation of Attributes without touching any other class.
We can make different implementations of Attributes (see the sketch below):
– An implementation that holds a Set of values (as in the example).
– An implementation that holds a List of values.
– A totally different data structure that we can think of.
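
For example, a List-backed variant might look like this hypothetical sketch. The point is that the public methods stay the same, so neither Item nor the tests would change:

import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A hypothetical List-backed variant of Attributes; same public API as above
public class ListAttributes {
	private final Map<String, List<String>> attributes = new HashMap<>();

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return new HashSet<>(attributes.get(attributeName));
	}

	public String getSingleValue(String attributeName) {
		return attributes.get(attributeName).get(0);
	}

	public ListAttributes addAttribute(String attributeName, Collection<String> values) {
		this.attributes.put(attributeName, new ArrayList<>(values));
		return this;
	}
}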

As long as all of our tests pass, we can be sure that everything is OK.

What did we get?

  • The code is much more maintainable.
  • Tests are simpler and more maintainable.
  • It is much more flexible. We can change implementation of Attributes (map, set, list, whatever we choose).
  • Changes in Attributes do not affect any other part of the code, not even the classes that use it directly.
  • Modularization and code reuse. We can use Attributes class in other places in the code.

The Single Responsibility Principle

Introduction
In this post I would like to cover the Single Responsibility Principle (SRP).
I think it is the basis of any clean and well-designed system.

What is SRP?
The term was introduced by Robert C. Martin.
It is the ‘S’ from the SOLID principles, which are the basis for OOD.
http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
Here’s the PDF paper for SRP by Robert C. Martin https://docs.google.com/file/d/0ByOwmqah_nuGNHEtcU5OekdDMkk/

From Wikipedia:

…In object-oriented programming, the single responsibility principle states that every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility….

From Clean Code:

A class or module should have one, and only one, reason to change.

So if a class (or module) needs to be modified for more than one reason, it does more than one thing, i.e. it has more than one responsibility.

Why SRP?

  • Organize the code
    Let’s imagine a car mechanic who owns a repair shop.
    He has many, many tools to work with. The tools are divided into types: pliers, screwdrivers (Phillips / blade), hammers, wrenches (tubing / hex) and many more.

    How would it be easier to organize the tools?
    A few drawers with different types in each one of them?
    Or, many small drawers, each containing a specific type?

    Now, imagine the drawer as the module. This is why many small modules (classes) are more organized than a few large ones.

  • Less fragile
    When a class has more than one reason to be changed, it is more fragile.
    A change in one location might lead to unexpected behavior in totally unrelated places.
  • Low Coupling
    More responsibilities lead to higher coupling.
    The couplings are the responsibilities.
    Higher coupling leads to more dependencies, which is harder to maintain.
  • Code Changes
    Refactoring is much easier for a single responsibility module.
    If you want to get the shotgun effect, let your classes have more responsibilities.
  • Maintainability
    It’s obvious that it is much easier to maintain a small, single-purpose class than a big monolithic one.
  • Testability
    A test class for a ‘one purpose class’ will have fewer test cases (branches).
    If a class has one purpose, it will usually have fewer dependencies, thus less mocking and test preparation.
    The “self documentation by tests” becomes much clearer.
  • Easier Debugging
    Since I started doing TDD and test-first approach, I hardly debug. Really.
    But, there come times when I must debug in order to understand what’s going on.
    In a single responsibility class, finding the bug or the cause of the problem becomes a much easier task.

What needs to have single responsibility?
Each part of the system.

  • The methods
  • The classes
  • The packages
  • The modules

How to Recognize a Break of the SRP?

  • Class Has Too Many Dependencies
    A constructor with too many input parameters implies many dependencies (hopefully you do inject dependencies).

    Another way to see too many dependencies is by the test class.
    If you need to mock too many objects, it usually means the SRP is broken.

  • Method Has Too Many Parameters
    Same as the class smell. Think of the method’s parameters as dependencies.
  • The Test Class Becomes Too Complicated
    If the test has too many variants, it might suggest that the class has too many responsibilities.
    It might suggest that some methods do too much.
  • Class / Method is Long
    If a method is long, it might suggest it does too much.
    Same goes for a class.
    My rule of thumb is that a class should not exceed 200-250 LOC. Imports included 😉
  • Descriptive Naming
    If you need the word AND to describe what your class / method / package does, it probably breaks the SRP.
  • Class With Low Cohesion
    Cohesion is an important topic of its own and deserves its own post.
    But cohesion and SRP are closely related, and it is important to mention it here.
    In general, if a class (or module) is not cohesive, it probably breaks the SRP.

    A hint for a non-cohesive class:
    The class has two fields. One field is used by some methods, the other field is used by the other methods (see the sketch after this list).

  • Change In One Place Breaks Another
    If a change in the code to add a new feature, or a simple refactor, broke a test which seems unrelated, it might suggest a break of the SRP.
  • Shotgun Effect
    If a small change makes a big ripple in your code, and you need to change many locations, it might suggest, among other smells, that the SRP is broken.
  • Unable to Encapsulate a Module
    I will explain using Spring, but the concept is important (not the implementation).
    Suppose you use the @Configuration or XML configuration.
    If you can’t encapsulate the beans in that configuration, it should give you a hint of too much responsibility.
    The Configuration should hide any inner bean and expose minimal interfaces.
    If you need to change the Configuration due to more than one reason, then, well, you know…
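
To illustrate the low-cohesion hint mentioned in the list above, here’s a hypothetical class in which each field is used by a disjoint set of methods (names invented for illustration):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Hypothetical non-cohesive class: dateFormat serves only the report method,
// smtpHost serves only the mail method — a hint of two responsibilities
public class ReportService {
	private final String dateFormat; // used only by createReport
	private final String smtpHost;   // used only by sendReport

	public ReportService(String dateFormat, String smtpHost) {
		this.dateFormat = dateFormat;
		this.smtpHost = smtpHost;
	}

	public String createReport(LocalDate date) {
		return "Report for " + date.format(DateTimeFormatter.ofPattern(dateFormat));
	}

	public void sendReport(String report) {
		System.out.println("Sending via " + smtpHost + ": " + report);
	}
}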

How to make the design compliant with the Single Responsibility Principle
The suggestions below can apply to other topics of the SOLID principles.
They are also good for any Clean Code suggestion.
But here they are aimed for the Single Responsibility Principle.

  • Awareness
    This is a general suggestion for clean code.
    We need to be aware of our code. We need to take care.
    As for SRP, we need to try to catch, as early as we can, a class that is responsible for too much.
    We need to always look for a ‘too big’ method.
  • Testable Code
    Write your code in a way that everything can be tested.
    Then you will surely want your tests to be simple and descriptive.
  • TDD
    (I am not going to add anything here)
  • Code Coverage Metrics
    Sometimes, when a class does too much, it won’t have 100% coverage at first shot.
    Check the code quality metrics.
  • Refactoring and Design Patterns
    For SRP, we’ll mostly do extract-method, extract-class and move-method.
    We’ll use composition and strategy instead of conditionals (see the sketch after this list).
  • Clear Modularization of the System
    When using a DI injector (Spring), I think that the Configuration class (or XML) can pinpoint the modules’ design, and the modules’ single responsibility.
    I prefer to have several small-to-medium-sized configuration files (XML or Java) rather than one big file / class.
    It helps to see the responsibility of each module and is easier to maintain.
    I think that the configuration approach to injection has an advantage over the annotation approach, simply because the configuration approach puts the modules in the spotlight.
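
As a small illustration of the composition / strategy suggestion above (hypothetical names, not from any specific project):

// Replacing a type-conditional with a strategy; each class has one reason to change
interface PricingStrategy {
	double price(double basePrice);
}

class RegularPricing implements PricingStrategy {
	public double price(double basePrice) { return basePrice; }
}

class DiscountPricing implements PricingStrategy {
	public double price(double basePrice) { return basePrice * 0.9; }
}

class Checkout {
	private final PricingStrategy pricing;

	Checkout(PricingStrategy pricing) {
		this.pricing = pricing;
	}

	double total(double basePrice) {
		// no if/else over customer types here
		return pricing.price(basePrice);
	}
}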

Conclusion
As I mentioned at the beginning of this post, I think that the Single Responsibility Principle is the basis of good design.
If you have this principle in mind while designing and developing, you will have simpler, more readable code.
Better design will follow.

One Final Note
As always, one needs to be careful about how to apply practices, code and design.
Sometimes we might overdo it and make simple things overly complex.
So common sense must be applied to any refactoring and change.