Dropwizard, MongoDB and Gradle Experimenting

Introduction

I created a small project using Dropwizard, MongoDB and Gradle.
It actually started as an experiment with a Guava cache used as a buffer for sending counters to MongoDB (or any other DB).
I also wanted to try Gradle with a MongoDB plugin.
Next, I wanted to create some kind of interface to check the framework, and I decided to try out Dropwizard.
And this is how this project was created.

This post is not a tutorial for any of the chosen technologies.
It is a small showcase, which I did as an experiment.
I guess there are some flaws, and maybe I am not using all the “best practices”.
However, I do believe that the project, with the help of this post, can be a good starting point for the different technologies I used.
I also tried to show some design choices which help achieve SRP, decoupling, cohesion, etc.

I decided to begin the post with the use-case description and how I implemented it.
After that, I will explain what I did with Gradle, MongoDB (and embedded) and Dropwizard.

Before I begin, here’s the source code:
https://github.com/eyalgo/CountersBuffering

The Use-Case: Counters With Buffer

We have input requests coming into our servers.
During the processing of a request, we choose to “paint” it with some data (decided by some logic).
Some requests will be painted with Value-1, some with Value-2, etc. Some will not be painted at all.
We want to limit the number of painted requests (per paint value).
In order to enforce the limit, we know the maximum per paint value, but we also need to count the number of painted requests per paint value.
As the system has several servers, the counters should be shared by all of them.

Latency is crucial. Normally we get 4-5 milliseconds of processing per request (for the whole flow, not just the painting).
So we don’t want incrementing the counters to increase the latency.
Instead, we keep a buffer; the client sends ‘increase’ requests to the buffer.
The buffer periodically updates the repository with bulk increments.

I know it is possible to use Hazelcast, Couchbase or some other fast in-memory DB directly.
But for our use-case, this was the best solution.

The principle is simple:

  • The dependent module calls a service to increase a counter for some key
  • The implementation keeps a buffer of counters per key
  • It is thread safe
  • The writing happens in a separate thread (see the sketch below)
  • Each write does a bulk increase
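
The writing thread itself is not shown in this post, so here is a minimal sketch, with assumed names (not the project’s actual code), of how the hand-off could look: threshold crossings and cache evictions pass a (key, delta) pair to a single worker thread, which performs the bulk update, so request threads never wait on the DB.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical repository abstraction for the bulk update
interface CountersRepository {
	void bulkIncrease(String key, int delta);
}

public class AsyncCountersUpdater {
	private final ExecutorService worker = Executors.newSingleThreadExecutor();
	private final CountersRepository repository;

	public AsyncCountersUpdater(CountersRepository repository) {
		this.repository = repository;
	}

	public void increaseCounter(Counterable key, int delta) {
		// returns immediately; the DB update runs on the worker thread
		worker.submit(() -> repository.bulkIncrease(key.counterKey(), delta));
	}
}
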
Counters High Level Design (diagram)

Buffer

For the buffer, I used Google Guava cache.

Buffer Structure

private final LoadingCache<Counterable, BufferValue> cache;
...

this.cache = CacheBuilder.newBuilder()
	.maximumSize(bufferConfiguration.getMaximumSize())
	.expireAfterWrite(bufferConfiguration.getExpireAfterWriteInSec(), TimeUnit.SECONDS)
	.expireAfterAccess(bufferConfiguration.getExpireAfterAccessInSec(), TimeUnit.SECONDS)
	.removalListener((notification) -> increaseCounter(notification))
	.build(new BufferValueCacheLoader());
...

(Counterable is described below)

BufferValueCacheLoader extends CacheLoader.
When we call increase (see below), we first get the value from the cache by key.
If the key does not exist, the loader returns a new value.

public class BufferValueCacheLoader extends CacheLoader<Counterable, BufferValue> {
	@Override
	public BufferValue load(Counterable key) {
		return new BufferValue();
	}
}

BufferValue wraps an AtomicInteger (at some point I will need to change it to AtomicLong).
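
The class itself is not shown here; based on the methods the buffer uses (increment, compareAndSet, getValue), a minimal sketch could be:

import java.util.concurrent.atomic.AtomicInteger;

// A sketch of BufferValue: a thin wrapper over AtomicInteger
public class BufferValue {
	private final AtomicInteger value = new AtomicInteger();

	public int increment() {
		return value.incrementAndGet();
	}

	public boolean compareAndSet(int expected, int newValue) {
		return value.compareAndSet(expected, newValue);
	}

	public int getValue() {
		return value.get();
	}
}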

Increase the Counter

public void increase(Counterable key) {
	BufferValue meter = cache.getUnchecked(key);
	int currentValue = meter.increment();
	if (currentValue > threshold) {
		if (meter.compareAndSet(currentValue, currentValue - threshold)) {
			increaseCounter(key, threshold);
		}
	}
}

When increasing a counter, we first get the current value from the cache (with the help of the loader, as described above).
compareAndSet atomically checks that the value is still the same (not modified by another thread).
If so, it updates the value and returns true.
On success (true returned), the buffer calls the updater.

View the buffer

After developing the service, I wanted a way to view the buffer.
So I implemented the following method, which is used by the front-end layer (Dropwizard’s resource).
It is a small example of Java 8 Streams and lambda expressions.

return ImmutableMap.copyOf(cache.asMap())
	.entrySet().stream()
	.collect(
		Collectors.toMap((entry) -> entry.getKey().toString(),
		(entry) -> entry.getValue().getValue()));

MongoDB

I chose MongoDB for two reasons:

  1. We have a similar implementation in our system, for which we decided to use MongoDB as well.
  2. It is easy to use with an embedded server.

I tried to design the system so that it is possible to swap in any other persistence implementation.

I used Morphia as the MongoDB client layer instead of using the Java driver directly.
With Morphia you create a DAO, which is the connection to a MongoDB collection.
You also declare a simple Java Bean (POJO) that represents a document in a collection.
Once you have the DAO, you can operate on the collection the “Java way”, with a fairly easy API.
You can run queries and any other CRUD operations, and more.

I had two operations: increasing a counter and getting all counters.
The service implementations do not extend Morphia’s BasicDAO; instead, they hold a class that inherits from it.
I used composition (over inheritance) because I wanted both services to have more behavior than the DAO provides.

In order to be consistent in the key representation, and to hide its implementation from the dependent code, I used an interface: Counterable, with a single method: counterKey().

public interface Counterable {
	String counterKey();
}
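
For illustration, an implementation on the dependent side might look like this (a hypothetical example, keyed by the paint value from the use-case):

public class PaintCounterable implements Counterable {
	private final String paintValue;

	public PaintCounterable(String paintValue) {
		this.paintValue = paintValue;
	}

	@Override
	public String counterKey() {
		return "paint-" + paintValue;
	}
}

And here is the DAO class, which inherits Morphia’s BasicDAO:
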
final class MongoCountersDao extends BasicDAO<Counter, ObjectId> {
	MongoCountersDao(Datastore ds) {
		super(Counter.class, ds);
	}
}
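
To illustrate the composition, a service might hold the DAO as a member, roughly like this (a sketch; the class and method names are my assumptions, not the project’s actual code):

public class MongoCountersService {
	private final MongoCountersDao dao;

	public MongoCountersService(Datastore ds) {
		this.dao = new MongoCountersDao(ds);
	}

	// the second operation: fetching all counters through the composed DAO
	public List<Counter> getAllCounters() {
		return dao.find().asList();
	}
}

The increasing operation also goes through the composed dao, as shown next.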

Increasing the Counter

@Override
protected void increaseCounter(String key, int value) {
	Query<Counter> query = dao.createQuery();
	query.criteria("id").equal(key);
	UpdateOperations<Counter> ops = dao.getDs().createUpdateOperations(Counter.class).inc("count", value);
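	// the third argument (true) makes this an upsert: create the document if it does not exist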
	dao.getDs().update(query, ops, true);
}

Embedded MongoDB

In order to run tests on the persistence layer, I wanted to use an in-memory database.
There’s an embedded MongoDB plugin for that.
With this plugin you can run a server by simply creating it at runtime, or run it as a goal in Maven / a task in Gradle.
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo
https://github.com/sourcemuse/GradleMongoPlugin
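
For the “create it at runtime” option, the flapdoodle API looks roughly like this (a sketch; the version and port here are arbitrary):

// Start an embedded MongoDB process from code (flapdoodle's embed.mongo API)
MongodStarter starter = MongodStarter.getDefaultInstance();
MongodExecutable executable = starter.prepare(new MongodConfigBuilder()
		.version(Version.Main.PRODUCTION)
		.net(new Net(12345, Network.localhostIsIPv6()))
		.build());
MongodProcess mongod = executable.start();
// ... run tests against localhost:12345 ...
mongod.stop();
executable.stop();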

Embedded MongoDB on Gradle

I will elaborate more on Gradle later, but here’s what I needed to do in order to set up the embedded Mongo.

dependencies {
	// More dependencies here
	testCompile 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
}

Setup Properties

mongo {
	//	logFilePath: The desired log file path (defaults to 'embedded-mongo.log')
	logging 'console'
	mongoVersion 'PRODUCTION'
	port 12345
	//	storageLocation: The directory location from where embedded Mongo will run, such as /tmp/storage (defaults to a java temp directory)
}
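
With the plugin configured, a persistence test can simply connect to the configured port. A small sketch (the database name here is arbitrary; MongoClient comes from the MongoDB Java driver, Morphia and Datastore from Morphia):

// Connect Morphia to the embedded server started by the Gradle plugin
MongoClient mongo = new MongoClient("localhost", 12345);
Datastore ds = new Morphia().map(Counter.class).createDatastore(mongo, "counters-test");
MongoCountersDao dao = new MongoCountersDao(ds);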

Embedded MongoDB Gradle Tasks

startMongoDb will just start the server. It will run until stopped.
stopMongoDb will stop it.
startManagedMongoDb test: two tasks which will start the embedded server before the tests run. The server shuts down when the JVM finishes (i.e., when the tests finish).

Gradle

https://gradle.org/
Although I have only touched the tip of the iceberg, I have started to see the strength of Gradle.
It wasn’t even that hard setting up the project.

Gradle Setup

First, I created a Gradle project in Eclipse (after installing the plugin).
Then I needed to set up the dependencies. Very simple. Just like Maven.

One Big JAR Output

When I want to create one big jar from all libraries in Maven, I use the Shade plugin.
I was looking for something similar, and found the gradle-one-jar plugin:
https://github.com/rholder/gradle-one-jar

I applied the plugin:

apply plugin: 'gradle-one-jar'

Then I added one-jar to the classpath:

buildscript {
	repositories { mavenCentral() }
	dependencies {
		classpath 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
		classpath 'com.github.rholder:gradle-one-jar:1.0.4'
	}
}

And added a task:

mainClassName = 'org.eyalgo.server.dropwizard.CountersBufferApplication'
task oneJar(type: OneJar) {
	mainClass = mainClassName
	archiveName = 'counters.jar'
	mergeManifestFromJar = true
}

Those were all the actions needed to make the application run.

Dropwizard

Dropwizard is a stack of libraries that makes it easy to create web servers quickly.
It uses Jetty for HTTP and Jersey for REST, and it has other mature libraries for creating complicated services.
It can be used to develop a microservice easily.

As I explained in the introduction, I will not cover all of Dropwizard features and/or setup.
There are plenty of sites for that.
I will briefly cover the actions I did in order to make the application run.
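
For context, the entry point is a class extending Dropwizard’s Application (the class name below comes from the Gradle mainClassName above; the configuration class name is my assumption):

public class CountersBufferApplication extends Application<CountersBufferConfiguration> {

	public static void main(String[] args) throws Exception {
		new CountersBufferApplication().run(args);
	}

	@Override
	public void run(CountersBufferConfiguration configuration, Environment environment) {
		// register resources, health checks, etc.
	}
}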

Gradle Run Task

run { args 'server', './src/main/resources/config/counters.yml' }

The first argument is server. The second argument is the location of the configuration file.
If you don’t give Dropwizard the first argument, you will get a nice error message listing the possible options.

positional arguments:
  {server,check}         available commands

I already showed how to create one jar in the Gradle section.

Configuration

In Dropwizard, you set up the application using a class that extends Configuration.
The fields in the class should align with the properties in the yml configuration file.

It is good practice to put the properties in groups, based on their usage / responsibility.
For example, I created a group for the Mongo parameters.

In order for the configuration class to read the sub-groups correctly, you need to create a class whose fields align with the properties in the group.
Then, in the main configuration, add this class as a member and mark it with the @JsonProperty annotation.
Example:

@JsonProperty("mongo")
private MongoServicesFactory servicesFactory = new MongoServicesFactory();
@JsonProperty("buffer")
private BufferConfiguration bufferConfiguration = new BufferConfiguration();
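
The sub-group class itself is just a bean whose fields align with the yml group. A sketch of what MongoServicesFactory might contain (the field names here are my assumptions):

public class MongoServicesFactory {
	@JsonProperty
	private String host = "localhost";

	@JsonProperty
	private int port = 27017;

	public String getHost() {
		return host;
	}

	public int getPort() {
		return port;
	}
}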

Example: Changing the Ports

Here’s part of the configuration file that sets the ports for the application.

server:
  adminMinThreads: 1
  adminMaxThreads: 64
  applicationConnectors:
    - type: http
      port: 9090
  adminConnectors:
    - type: http
      port: 9091

Health Check

Dropwizard provides a basic admin API out of the box. I changed its port to 9091.
I created a health check for the MongoDB connection.
You need to extend HealthCheck and implement the check method.

private final MongoClient mongo;
...
protected Result check() throws Exception {
	try {
		mongo.getDatabaseNames();
		return Result.healthy();
	} catch (Exception e) {
		return Result.unhealthy("Cannot connect to " + mongo.getAllAddress());
	}
}
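
The check is then registered in the application’s run method. Assuming the class above is called MongoHealthCheck, the registration would be along these lines:

// in run(): 'mongo' is the MongoClient built from the configuration
environment.healthChecks().register("mongo", new MongoHealthCheck(mongo));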

Other features are pretty much self-explanatory, or as simple as in any getting-started tutorial.

Ideas for Enhancement

There are some things I may try to add.

  • Add tests to the Dropwizard section.
    This project started as a PoC, so, unlike my usual practice, I skipped the tests in the server part.
    Dropwizard has Testing Dropwizard, which I want to try.
  • A different persistence implementation (Couchbase? Hazelcast?).
  • Injection using Google Guice, and with its help, injecting a different persistence implementation.

That’s all.
Hope that helps.

Source code: https://github.com/eyalgo/CountersBuffering


JUnit Rules

Introduction
In this post I would like to show an example of how to use JUnit Rule to make testing easier.

Recently I inherited a rather complex system, in which not everything is tested, and even the tested code is complex.
Mostly I see a lack of test isolation.
(I will write a different blog post about working with legacy code.)

One of the tests (and the code under it) I am fixing actually tests several components together.
It also connects to the DB. It tests some logic and the intersection between components.
When code in a totally different location did not compile, this test could not run, because it loaded the whole Spring context.
The structure was such that before testing any class, the whole Spring context was initiated.
The tests extend BaseTest, which loads the whole Spring context.

BaseTest also cleans the DB in the @After method.

Important note: this article is about changing tests which are not structured entirely correctly.
When creating new code and tests, they should be isolated, test one thing, etc.
Better tests should use a mock DB / mock dependencies, etc.
After I fix the tests and refactor, I’ll have the confidence to make more changes.

Back to our topic…
So, what I had was a slow test suite, no isolation, and even problems running tests due to unrelated issues.

I decided to separate the context loading from the DB connection, and both of those from the cleanup of the database.

Approach
In order to achieve that, I did three things:
The first was to change the inheritance of the test class.
It stopped inheriting BaseTest.
Instead, it inherits AbstractJUnit4SpringContextTests.
Now I can create my own context per test and not load everything.

Next I needed two rules, a @ClassRule and a @Rule.
The @ClassRule will be responsible for the DB connection.
The @Rule will clean up the DB before / after each test.

But first, what are JUnit Rules?
A short explanation would be that they provide the possibility to intercept test methods, similar to the AOP concept.
@Rule allows us to intercept a method before and after its actual run.
@ClassRule intercepts the test class run.
A well-known @Rule is JUnit’s TemporaryFolder.

(Similar to @Before, @After and @BeforeClass).
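
As a quick illustration of TemporaryFolder, here is a small usage example; the folder and file are created for the test and deleted automatically afterwards:

import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.IOException;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TemporaryFolderExample {
	@Rule
	public TemporaryFolder folder = new TemporaryFolder();

	@Test
	public void createsAFileThatIsDeletedAfterTheTest() throws IOException {
		File file = folder.newFile("example.txt");
		assertTrue(file.exists());
	}
}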

Creating @Rule
The easy part was to create a rule that cleans up the DB before and after a test method.
You need to implement TestRule, which has one method: Statement apply(Statement base, Description description).
You can do a lot with it.
I found that I usually have an inner class that extends Statement.
The rule I created does not create the DB connection itself, but receives it in the constructor.

Here’s the full code:

public class DbCleanupRule implements TestRule {
	private final DbConnectionManager connection;

	public DbCleanupRule(DbConnectionManager connection) {
		this.connection = connection;
	}

	@Override
	public Statement apply(Statement base, Description description) {
		return new DbCleanupStatement(base, connection);
	}

	private static final class DbCleanupStatement extends Statement {
		private final Statement base;
		private final DbConnectionManager connection;

		private DbCleanupStatement(Statement base, DbConnectionManager connection) {
			this.base = base;
			this.connection = connection;
		}

		@Override
		public void evaluate() throws Throwable {
			try {
				cleanDb();
				base.evaluate();
			} finally {
				cleanDb();
			}
		}

		private void cleanDb() {
			connection.doTheCleanup();
		}
	}
}

Creating @ClassRule
A ClassRule is actually also a TestRule.
The only difference from Rule is how we use it in the test code.
I’ll show that below.

The challenge in creating this rule was that I wanted to use the Spring context to get the correct connection.
Here’s the code:
(ExternalResource is a TestRule)

public class DbConnectionRule extends ExternalResource {
	private DbConnectionManager connection;

	public DbConnectionRule() {
	}

	@Override
	protected void before() throws Throwable {
		ClassPathXmlApplicationContext ctx = null;
		try {
			ctx = new ClassPathXmlApplicationContext("/META-INF/my-db-connection-TEST-ctx.xml");
			connection = (DbConnectionManager) ctx.getBean("myDbConnection");
		} finally {
			if (ctx != null) {
				ctx.close();
			}
		}
	}

	@Override
	protected void after() {
	}

	public DbConnectionManager getDbConnection() {
		return connection;
	}
}

(Did you see that I could make DbCleanupRule inherit ExternalResource?)

Using it
The last part is how we use the rules.
A @Rule must be a public field.
A @ClassRule must be a public static field.

And there it is:

@ContextConfiguration(locations = { "/META-INF/one-dao-TEST-ctx.xml", "/META-INF/two-TEST-ctx.xml" })
public class ExampleDaoTest extends AbstractJUnit4SpringContextTests {
	@ClassRule
	public static DbConnectionRule connectionRule = new DbConnectionRule();

	@Rule
	public DbCleanupRule dbCleanupRule = new DbCleanupRule(connectionRule.getDbConnection());

	@Autowired
	private ExampleDao classToTest;

	@Test
	public void foo() {
	}
}

That’s all.
Hope it helps.

Eyal

[Edit]
I got some good remarks from Logan Mzz at DZone: http://java.dzone.com/articles/junit-rules#comment-125673

  1. Link to JUnit Rules: https://github.com/junit-team/junit/wiki/Rules
  2. There’s the ErrorCollector rule, which avoids annoying test-fail-fix cycles for a single test (see the example below).
  3. And RuleChain, which is described in the comment.
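
Here is a small example of ErrorCollector: all failing checks are reported together at the end of the test, instead of the test stopping at the first failure.

import static org.hamcrest.Matchers.is;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class ErrorCollectorExample {
	@Rule
	public ErrorCollector collector = new ErrorCollector();

	@Test
	public void collectsAllFailuresInOneRun() {
		collector.checkThat("first check", 1, is(1));  // passes
		collector.checkThat("second check", 2, is(3)); // recorded, test continues
		collector.checkThat("third check", 3, is(4));  // also recorded
	}
}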


Agile Mindset During Programming

I’m Stuck

Recently I found myself in several situations where I just couldn’t write code. Or at least, not “good code”.
First, I had “writer’s block”. I just could not see what my next test to write should be.
I could not find a name for the class / interface I needed.
Second, I just couldn’t simplify my code. Each time I tried to change something (a class / method) to a simpler construction, things got worse. Sometimes they broke.

I was stuck.

The Tasks

Refactor to Patterns

One of the situations we had was to refactor a certain piece of the code.
This piece of code is the manual wiring part. We use the DI pattern in ALL of our system, but due to some technical constraints, we must do the injection by hand. We can live with that.
The refactoring of the wiring part would have given us a nice option to change some of the implementations during boot.
Some of the concrete classes should be different from others, based on some flags.
The design patterns we understood we would need were Factory Method and Abstract Factory.
This last remark is important for understanding why I had those difficulties.
I will get to it later.

New Module

Another task was to create a new module that gets some input items, extracts data from them, sends it to a service, parses the response, modifies the data accordingly and returns the items with the modified data.
While talking about it with a peer, we understood we would need several classes.
As always, we wanted high-quality code, applying the known OOD principles wherever we could.

So What Went Wrong?

In the case of refactoring the wiring part, I constantly tried to immediately create the end result of the abstract factory and the factory method that would call it.
There are a lot of details in that wiring code. Some are common and some needed to be separated by the factory.
I just couldn’t find the correct places to extract to methods and then to another class.
Each time I had to move code from one location and dependency to another.
I couldn’t tell what exactly the factory’s signature and methods would be.

In the case of the new module, I knew that I wanted several classes, each with one responsibility. I knew I wanted some level of abstraction and good encapsulation.
So I kept trying to create this great encapsulated abstract data structure, and the code kept being extremely complicated.
An important note: I always take a test-first approach.
Each time I tried to create a test for a certain behavior, it was really, really complicated.

I stopped

I went to have a cup of coffee.
I went to read some unrelated stuff.
And I talked to one of my peers.
We both understood what we needed to do.
I went home…

And then it hit me

The problem I had was that I knew where I needed to go, but instead of taking small steps, I kept trying to take one big leap at once.
This brings me to the analogy between Agile and good programming habits (TDD being one of them).

Agile and Programming Analogy

One of the advantages of Agile development that I really like is the small steps (iterations) we take in order to reach our goal.
Check the two pictures below.
One shows how we aim towards a faraway goal and probably miss.
The other shows how we divide the way into iterations and aim incrementally.

Aiming From Far (diagram)

Aiming Iterative and Incremental (diagram)

Develop in Small Incremental Iterations

This is the moral of the story.
Even if you know exactly how the structure of the classes should look.
Even if you know exactly which design pattern to use.
Even if you know what to do.
Even if you know exactly how the end result should look.

Keep on using the methods and practices that bring you to the goal in the safest and fastest way.
Do small steps.
Test each step.
Increment the functionality of the code in small chunks.
TDD.
Pair.
Keep calm.

Refactor Big Leap (diagram)

Refactor Small Steps (diagram)


Request Validation and Filtering by Flags – Redesign and Refactoring

General
In the previous posts I started describing a validation / filtering framework we’re building.
While showing the code, I am trying to show clean code, test orientation and code evolution.
There is some agility in the process; we know the end requirements, but the exact details are evolving over time.

During the development we have changed the code to be more general as we saw some patterns in it.
The code evolved as the flow evolved as well.

The flow as we now understand it
Here’s a diagram of the flow we’ll implement:

Request Sequence (diagram)

The Pattern
At each step of the sequence (validation, filtering, action), we recognized the same pattern:

  1. We have specific implementations (filters, validations).
  2. We have an engine that wraps the specific implementations.
  3. We need to map the implementations by flag and, based on the request’s flags, select the appropriate ones.
  4. We need a class that calls the mapper and then the engine (see the sketch below the diagram).

A diagram showing the pattern:

The Pattern (diagram)
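
To make item 4 of the list above concrete, here is a sketch of such a calling class for the filtering step (the class name is my assumption; MapperByFlag is shown below, and FiltersEngine comes from the filtering post):

public class ItemsFilteringFlow {
	private final MapperByFlag<Filter> mapper;
	private final FiltersEngine engine;

	public ItemsFilteringFlow(MapperByFlag<Filter> mapper, FiltersEngine engine) {
		this.mapper = mapper;
		this.engine = engine;
	}

	public ItemsFilterResponse filter(Request request) {
		// select the filters by the request's flags, then run the engine on the items
		List<Filter> filters = mapper.getOperations(request);
		return engine.applyFilters(filters, request.getItems());
	}
}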

Source Code
In order to show some of the evolution of the code, and how refactoring changed it, I added tags in GitHub after major changes.

Code Examples
Let’s see what came out of the mapper pattern.

public interface MapperByFlag<T> {
  List<T> getOperations(Request request);
}
public abstract class AbstractMapperByFlag<T> implements MapperByFlag<T> {
  private List<T> defaultOperations;
  private Map<String, List<T>> mapOfOperations;

  public AbstractMapperByFlag(List<T> defaultOperations, Map<String, List<T>> mapOfOperations) {
    this.defaultOperations = defaultOperations;
    this.mapOfOperations = mapOfOperations;
  }

  @Override
  public final List<T> getOperations(Request request) {
    Set<T> selectedFilters = Sets.newHashSet(defaultOperations);
    Set<String> flags = request.getFlags();
    for (String flag : flags) {
      if (mapOfOperations.containsKey(flag)) {
        selectedFilters.addAll(mapOfOperations.get(flag));
      }
    }
    return Lists.newArrayList(selectedFilters);
  }
}
public class RequestValidationByFlagMapper extends AbstractMapperByFlag<RequestValidation> {
  public RequestValidationByFlagMapper(List<RequestValidation> defaultValidations,
      Map<String, List<RequestValidation>> mapOfValidations) {
    super(defaultValidations, mapOfValidations);
  }
}

public class ItemFiltersByFlagMapper extends AbstractMapperByFlag<Filter> {
  public ItemFiltersByFlagMapper(List<Filter> defaultFilters, Map<String, List<Filter>> mapOfFilters) {
    super(defaultFilters, mapOfFilters);
  }
}

I created a test for the abstract class, to show the flow itself.
The tests of the implementations use Java reflection to verify that the correct injected parameters are sent to the super constructor.
I am showing the imports here as well, as a reference for the static imports and the Mockito and Hamcrest packages and classes.

import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.when;

import java.util.List;
import java.util.Map;

import org.eyal.requestvalidation.model.Request;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;

@RunWith(MockitoJUnitRunner.class)
public class AbstractMapperByFlagTest {
	private final static String FLAG_1 = "flag 1";
	private final static String FLAG_2 = "flag 2";

	@Mock
	private Request request;

	private String defaultOperation1 = "defaultOperation1";
	private String defaultOperation2 = "defaultOperation2";
	private String mapOperation11 = "mapOperation11";
	private String mapOperation12 = "mapOperation12";
	private String mapOperation23 = "mapOperation23";

	private MapperByFlag<String> mapper;

	@Before
	public void setup() {
		List<String> defaults = Lists.newArrayList(defaultOperation1, defaultOperation2);
		Map<String, List<String>> mapped = ImmutableMap.<String, List<String>> builder()
		        .put(FLAG_1, Lists.newArrayList(mapOperation11, mapOperation12))
		        .put(FLAG_2, Lists.newArrayList(mapOperation23, mapOperation11)).build();
		mapper = new AbstractMapperByFlag<String>(defaults, mapped) {
		};
	}

	@Test
	public void whenRequestDoesNotHaveFlagsShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet());

		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}

	@Test
	public void whenRequestHasFlagsNotInMappingShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet("un-mapped-flag"));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasOneFlagShouldReturnWithDefaultAndMappedFilters() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasTwoFlagsShouldReturnWithDefaultAndMappedFiltersWithoutDuplications() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1, FLAG_2));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2, mapOperation23));
	}
}
@RunWith(MockitoJUnitRunner.class)
public class RequestValidationByFlagMapperTest {

	@Mock
	private List<RequestValidation> defaultValidations;

	@Mock
	private Map<String, List<RequestValidation>> mapOfValidations;

	@InjectMocks
	private RequestValidationByFlagMapper mapper;

	@SuppressWarnings("unchecked")
	@Test
	public void verifyParameters() throws NoSuchFieldException, SecurityException, IllegalArgumentException,
			IllegalAccessException {
		Field defaultOperationsField = AbstractMapperByFlag.class.getDeclaredField("defaultOperations");
		defaultOperationsField.setAccessible(true);
		List<RequestValidation> actualFilters = (List<RequestValidation>) defaultOperationsField.get(mapper);
		assertThat(actualFilters, sameInstance(defaultValidations));

		Field mapOfFiltersField = AbstractMapperByFlag.class.getDeclaredField("mapOfOperations");
		mapOfFiltersField.setAccessible(true);
		Map<String, List<RequestValidation>> actualMapOfFilters = (Map<String, List<RequestValidation>>) mapOfFiltersField.get(mapper);
		assertThat(actualMapOfFilters, sameInstance(mapOfValidations));
	}
}

To Do
There are other classes that might be candidate for refactoring of some sort.
RequestFlowValidation and RequestFilter are similar.
And
RequestValidationsEngineImpl and FiltersEngine

To Do 2
Create a Matcher for the reflection part.

Code
As always, all the code can be found at: https://github.com/eyalgo/request-validation

A tag for this post: all-components-in

Conclusion
The infrastructure is almost done.
During this time we are also implementing actual classes for the flow (validations, filters, actions).
These are not covered in the posts, nor in GitHub.
The infrastructure will be wired to a service we have using Spring.
This will be explained in future posts.

Request Validation and Filtering by Flags – Filtering an Item

On a previous post, I introduced a system requirement of validating and filtering a request by setting flags on it.

Reference: Introduction

In this post I want to show the filtering system.

Here are general UML diagrams of the filtering components and sequence:

Filtering UML Diagram (diagram)

General Components

public interface Item {
	String getName();
}

public interface Request {
	Set<String> getFlags();
	List<Item> getItems();
}

Filter Mechanism (as described in the UML above)

public interface Filter extends Predicate<Item> {
	String errorMessage();
}

FiltersEngine is a cool part, which takes several Filters and applies each of them to the items. Below you can see its code; the sequence diagram above shows how it’s done.

public class FiltersEngine {

	public FiltersEngine() {
	}

	public ItemsFilterResponse applyFilters(List<Filter> filters, List<Item> items) {
		List<Item> validItems = Lists.newLinkedList(items);
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Filter filter : filters) {
			ItemsFilterResponse responseFromFilter = responseFromFilter(validItems, filter);
			validItems = responseFromFilter.getValidItems();
			invalidItemInformations.addAll(responseFromFilter.getInvalidItemsInformations());
		}

		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}

	private ItemsFilterResponse responseFromFilter(List<Item> items, Filter filter) {
		List<Item> validItems = Lists.newLinkedList();
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Item item : items) {
			if (filter.apply(item)) {
				validItems.add(item);
			} else {
				invalidItemInformations.add(new InvalidItemInformation(item, filter.errorMessage()));
			}
		}
		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}
}

And of course, we need to test it:

@RunWith(MockitoJUnitRunner.class)
public class FiltersEngineTest {
	private final static String MESSAGE_FOR_FILTER_1 = "FILTER - 1 - ERROR";
	private final static String MESSAGE_FOR_Filter_2 = "FILTER - 2 - ERROR";
	@Mock(name = "filter 1")
	private Filter singleFilter1;
	@Mock(name = "filter 2")
	private Filter singleFilter2;
	@Mock(name = "item 1")
	private Item item1;
	@Mock(name = "item 2")
	private Item item2;

	@InjectMocks
	private FiltersEngine filtersEngine;

	@Before
	public void setup() {
		when(singleFilter1.errorMessage()).thenReturn(MESSAGE_FOR_FILTER_1);
		when(singleFilter2.errorMessage()).thenReturn(MESSAGE_FOR_Filter_2);

		when(item1.getName()).thenReturn("name1");

		when(item2.getName()).thenReturn("name2");
	}

	@Test
	public void verifyThatAllSingleFiltersAreCalledForValidItems() {
		when(singleFilter1.apply(item1)).thenReturn(true);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item1)).thenReturn(true);
		when(singleFilter2.apply(item2)).thenReturn(true);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat("expected no invalid", response.getInvalidItemsInformations(),
				emptyCollectionOf(InvalidItemInformation.class));
		assertThat(response.getValidItems(), containsInAnyOrder(item1, item2));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter2).apply(item1);
		verify(singleFilter2).apply(item2);
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	@SuppressWarnings("unchecked")
	@Test
	public void itemsFailInDifferentFiltersShouldGetOnlyFailures() {
		when(singleFilter1.apply(item1)).thenReturn(false);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item2)).thenReturn(false);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat(
				response.getInvalidItemsInformations(),
				containsInAnyOrder(matchInvalidInformation(new InvalidItemInformation(item1, MESSAGE_FOR_FILTER_1)),
						matchInvalidInformation(new InvalidItemInformation(item2, MESSAGE_FOR_Filter_2))));
		assertThat(response.getValidItems(), emptyCollectionOf(Item.class));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter1).errorMessage();
		verify(singleFilter2).apply(item2);
		verify(singleFilter2).errorMessage();
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	@Test
	public void firstItemFailSecondItemSuccessShouldGetOneItemInEachList() {
		when(singleFilter1.apply(item1)).thenReturn(true);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item1)).thenReturn(false);
		when(singleFilter2.apply(item2)).thenReturn(true);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat(response.getInvalidItemsInformations(), contains(matchInvalidInformation(new InvalidItemInformation(item1,
				MESSAGE_FOR_Filter_2))));
		assertThat(response.getValidItems(), containsInAnyOrder(item2));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter2).apply(item1);
		verify(singleFilter2).apply(item2);
		verify(singleFilter2).errorMessage();
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	private static BaseMatcher<InvalidItemInformation> matchInvalidInformation(InvalidItemInformation expected) {
		return new InvalidItemInformationMatcher(expected);
	}

	private final static class InvalidItemInformationMatcher extends BaseMatcher<InvalidItemInformation> {
		private InvalidItemInformation expected;

		private InvalidItemInformationMatcher(InvalidItemInformation expected) {
			this.expected = expected;
		}

		public boolean matches(Object itemInformation) {
			InvalidItemInformation actual = (InvalidItemInformation) itemInformation;
			return actual.getName().equals(expected.getName())
					&& actual.getErrorMessage().equals(expected.getErrorMessage());
		}

		public void describeTo(Description description) {
		}
	}
}

Some explanation about the test:
You can see that I don’t care about the implementation of Filter. Actually, I don’t even have any implementation of it.
I also don’t have an implementation of Item or of the Request.
You can see an example of how to create a BaseMatcher to be used with assertThat(…).

Coding
Try to see whether it is ‘clean’. Can you understand the story of the code? Can you tell what the code does by reading it line by line?

In the following post I will show how I applied the flag mapping to select the correct filters for a request.

You can find all the code in: https://github.com/eyalgo/request-validation

[Edit] Created tag Filtering_an_item before refactoring.

Recommended Books

I have a list of books which I highly recommend.
Each book taught me something different.

It all began years ago, when I went into the interviewing process for my second workplace.
I was a junior Java developer, a coder. I didn’t have much experience and, more importantly, I did not have a mentor or someone who would direct me. I had learned on my own, after a CS Java course. Java 1.4 had just come out.

One of my first interviewers was a great mentor. We met for an hour (probably). I don’t remember the company.  I don’t remember the job position. I don’t remember his name.
But I DO remember a few things he asked me.
He asked me if I know what TDD was. He asked me about XP.
He also recommended a book: Effective Java by Joshua Bloch.

He didn’t even know what a great gift he gave me.

So I went on and bought Effective Java, 1st edition, and TDD by Kent Beck.
That was my first step towards becoming a craftsman.

Effective Java and Refactoring
These two books look as they are not entirely related.
However, both of these books thought me a-lot about design and patterns.
I started to understand how to write code using patterns (Refactoring), and how to do it in Java (Effective).
These books gave me the grounds for best practice in Java and Design Patterns and OOD.

Test Driven Development
I can’t say enough about this book.
At first, I really didn’t understand what it was all about.
But it was part of XP!! (which I didn’t understand either).
TDD was left on the shelf until I was ready for it.

Clean Code and The Pragmatic Programmer
Should I say more?
If you haven’t read both, stop everything and go read them.
They are a MUST for anyone who wants to be a craftsman and takes his / her profession seriously.
These books are also lots of fun to read, especially the Pragmatic book.

The Clean Coder
If you want to take the next step towards being a professional, read it.
I was sometimes frustrated while reading it. I thought to myself: how can I pass all of this material on to my teammates…

Dependency Injection
Somewhat unrelated, but as I see it, if you don’t use DI, you can’t write clean, testable code.
If you can’t write clean, testable code, you are missing the point of craftsmanship.
The book covers some injector frameworks, but also describes what it is all about.

Below is a table with the books I have mentioned.

One last remark: this list does not contain the only books I have read.
Over the years I have read more technical / professional books, but these made the most difference for me.

Name                      | Author(s)                  | ISBN
Effective Java            | Joshua Bloch               | 978-032-135-668-0
Test-Driven Development   | Kent Beck                  | 978-032-114-653-3
Refactoring               | Martin Fowler              | 978-020-148-567-7
Dependency Injection      | Dhanji R. Prasanna         | 978-193-398-855-9
Clean Code                | Robert C. Martin           | 978-013-235-088-4
The Clean Coder           | Robert C. Martin           | 978-013-708-107-3
The Pragmatic Programmer  | Andrew Hunt, David Thomas  | 978-020-161-622-4