dropwizard-jobs – My First Open Source Contribution

I am very excited today.
Today I made an actual contribution to the open source community.
I helped publish Java libraries to Maven Central.

The library we published is a plugin for Dropwizard that uses Quartz:
https://github.com/spinscale/dropwizard-jobs

You can check it out. The README explains how to use it.
In this post I will not explain the plugin itself, but share my experience of contributing to an open source project.

Why Even Contribute

There are many reasons; Google is full of them.
Here are mine:

  • I really wanted to help the community (in this case, the originators of the code).
  • It improves my skill set. I know more now than I knew before.
  • It exposed me to technologies and processes that I don’t usually use.
  • It is part of my digital signature and branding.

Why This Project

I have known about Dropwizard for more than a year.
I didn’t have the chance to use it at work.
I did some experiments with Dropwizard to get a feeling for it.

In one of my POCs, I wanted to create a scheduling mechanism in the micro-service I created.
By searching Google, I found this project.
First of all, I liked what it does and how it does it.
I also liked the explanation (how to use it). It’s clear and I could work with it immediately.
I think the developers did a good job.

How It (my contribution) All Started

But one thing was missing: it wasn’t in any public Maven repository (Central or other).
So I asked whether the developers planned to publish it.

Issue #10 in the repository shows my question and the beginning of the conversation.
Issue 10, question from 2015/02/24

Basically the problem was the time needed to comply with the publishing requirements. The code itself was working.

My Contribution

I took it upon myself to publish it to a public Maven repository.
I had never done anything like that, so I wasn’t sure what to do.
I thought of using Bintray by JFrog.
Eventually I decided to use Sonatype; it felt more comfortable. So I started reading about OSSRH (OSS Repository Hosting).
There’s an explanation of the process below.

I forked the code to my GitHub account and used pull requests to merge the code I pushed.
I mostly modified the pom files to comply with the Sonatype requirements, as explained in the tutorials.

Once we were all set, I did the actual publishing.
And now it’s there. Everyone can use it.

At first I was extra careful with any change. After all, “it’s not my code”…
Over time, I felt more comfortable modifying the code and opening pull requests.

How To Upload to Sonatype

I used the tutorials, which explain clearly what to do.
http://central.sonatype.org/pages/ossrh-guide.html

  1. Create a user at OSSRH
  2. Open an issue with links to GitHub, the group ID and the artifact ID
  3. Follow the instructions (in our case, I had to modify the Maven group ID)
  4. Add the required plugins to the pom file:
    maven
    pgp – read it carefully
  5. Deploy


Dropwizard, MongoDB and Gradle Experimenting

Introduction

I created a small project using Dropwizard, MongoDB and Gradle.
It actually started as an experiment with Guava cache as a buffer for sending counters to MongoDB (or any other DB).
I also wanted to try Gradle with a MongoDB plugin.
Next, I wanted to create some kind of interface to check this framework, and I decided to try out Dropwizard.
And this is how this project was created.

This post is not a tutorial of using any of the chosen technologies.
It is a small showcase, which I did as an experiment.
I guess there are some flaws and maybe I am not following all the “best practices”.
However, I do believe that the project, with the help of this post, can be a good starting point for the different technologies I used.
I also tried to show some design choices, which help achieve SRP, decoupling, cohesion, etc.

I decided to begin the post with the use-case description and how I implemented it.
After that, I will explain what I did with Gradle, MongoDB (including embedded) and Dropwizard.

Before I begin, here’s the source code:
https://github.com/eyalgo/CountersBuffering

The Use-Case: Counters With Buffer

We have some input requests coming into our servers.
During the processing of a request, we choose to “paint” it with some data (decided by some logic).
Some requests will be painted with Value-1, some with Value-2, etc. Some will not be painted at all.
We want to limit the number of painted requests (per paint value).
In order to enforce the limit, we know the maximum for each paint value, but we also need to count (per paint value) the number of painted requests.
As the system has several servers, the counters should be shared by all servers.

Latency is crucial. Normally we get 4-5 milliseconds per request processing (for the whole flow, not just the painting).
So we don’t want incrementing the counters to increase the latency.
Instead, we’ll keep a buffer; the client will send ‘increase’ requests to the buffer.
The buffer will periodically update the repository with bulk increments.

I know it is possible to use Hazelcast, Couchbase or some other similarly fast in-memory DB directly.
But for our use-case, this was the better solution.

The principle is simple:

  • The dependent module will call a service to increase a counter for some key
  • The implementation keeps a buffer of counters per key
  • It is thread safe
  • The writing happens in a separate thread
  • Each write will do a bulk increase

Counters High Level Design

Buffer

For the buffer, I used Google Guava cache.

Buffer Structure

private final LoadingCache<Counterable, BufferValue> cache;
...

this.cache = CacheBuilder.newBuilder()
	.maximumSize(bufferConfiguration.getMaximumSize())
	.expireAfterWrite(bufferConfiguration.getExpireAfterWriteInSec(), TimeUnit.SECONDS)
	.expireAfterAccess(bufferConfiguration.getExpireAfterAccessInSec(), TimeUnit.SECONDS)
	.removalListener((notification) -> increaseCounter(notification))
	.build(new BufferValueCacheLoader());
...

(Counterable is described below)

BufferValueCacheLoader implements the CacheLoader interface.
When we call increase (see below), we first get the value from the cache by key.
If the key does not exist, the loader returns a new BufferValue.

public class BufferValueCacheLoader extends CacheLoader<Counterable, BufferValue> {
	@Override
	public BufferValue load(Counterable key) {
		return new BufferValue();
	}
}

BufferValue wraps an AtomicInteger (I would need to change it to an AtomicLong at some point).
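
The class itself is not shown in the post; here is a minimal sketch of what it might look like, with method names matching the calls in increase() below and in the buffer view:

import java.util.concurrent.atomic.AtomicInteger;

// A hedged sketch of BufferValue, wrapping an AtomicInteger.
public class BufferValue {
	private final AtomicInteger value = new AtomicInteger(0);

	// increment and return the new value
	public int increment() {
		return value.incrementAndGet();
	}

	// atomically swap 'expected' for 'newValue'; false if another thread changed the value meanwhile
	public boolean compareAndSet(int expected, int newValue) {
		return value.compareAndSet(expected, newValue);
	}

	public int getValue() {
		return value.get();
	}
}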

Increase the Counter

public void increase(Counterable key) {
	BufferValue meter = cache.getUnchecked(key);
	int currentValue = meter.increment();
	if (currentValue > threashold) {
		if (meter.compareAndSet(currentValue, currentValue - threashold)) {
			increaseCounter(key, threashold);
		}
	}
}

When increasing a counter, we first get the current value from the cache (with the help of the loader, as described above).
compareAndSet will atomically check whether the value still equals the one we read (i.e. it was not modified by another thread).
If so, it will update the value and return true.
On success (returned true), the buffer calls the updater.
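
Since Counterable (described in the MongoDB section below) has a single method, calling code can pass the key as a lambda. A small usage sketch; the countersBuffer variable name is only illustrative:

// 'countersBuffer' stands for whatever object exposes the increase(Counterable) method above.
countersBuffer.increase(() -> "Value-1"); // the "paint" value acts as the counter key
countersBuffer.increase(() -> "Value-2");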

View the buffer

After developing the service, I wanted a way to view the buffer.
So I implemented the following method, which is used by the front-end layer (a Dropwizard resource).
It’s a small example of Java 8 Stream and lambda expressions.

return ImmutableMap.copyOf(cache.asMap())
	.entrySet().stream()
	.collect(
		Collectors.toMap((entry) -> entry.getKey().toString(),
		(entry) -> entry.getValue().getValue()));

MongoDB

I chose MongoDB because of two reasons:

  1. We have a similar implementation in our system, for which we decided to use MongoDB as well.
  2. It is easy to use with an embedded server.

I tried to design the system so that it’s possible to choose any other persistence implementation and switch to it.

I used Morphia as the MongoDB client layer instead of using the Java client directly.
With Morphia you create a DAO, which is the connection to a MongoDB collection.
You also declare a simple Java Bean (POJO) that represents a document in a collection.
Once you have the DAO, you can do operations on the collection the “Java way”, with a fairly easy API.
You can run queries and any other CRUD operations, and more.

I had two operations: increasing a counter and getting all counters.
The service implementations do not extend Morphia’s BasicDAO; instead each has a class that inherits from it.
I used composition (over inheritance) because I wanted both services to have more behavior than the DAO alone provides.

In order to be consistent with the key representation, and to hide the way it is implemented from the dependent code, I used an interface, Counterable, with a single method: counterKey().

public interface Counterable {
	String counterKey();
}
final class MongoCountersDao extends BasicDAO<Counter, ObjectId> {
	MongoCountersDao(Datastore ds) {
		super(Counter.class, ds);
	}
}
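
The Counter document itself is not shown in the post. Here is a hedged sketch of what such a Morphia POJO might look like, assuming Morphia’s @Entity / @Id annotations; the exact fields in the repository may differ, but the id / count names match the query and update shown below:

@Entity("counters")
public class Counter {
	@Id
	private ObjectId objectId; // Morphia document id (matches BasicDAO<Counter, ObjectId>)
	private String id;         // the counter key, i.e. Counterable.counterKey()
	private long count;        // the accumulated value, increased in bulk

	Counter() {
		// no-args constructor needed by Morphia
	}

	public String getId() {
		return id;
	}

	public long getCount() {
		return count;
	}
}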

Increasing the Counter

@Override
protected void increaseCounter(String key, int value) {
	Query<Counter> query = dao.createQuery();
	query.criteria("id").equal(key);
	UpdateOperations<Counter> ops = dao.getDs().createUpdateOperations(Counter.class).inc("count", value);
	dao.getDs().update(query, ops, true); // the last parameter creates the counter document if it does not exist (upsert)
}

Embedded MongoDB

In order to run tests on the persistence layer, I wanted to use an in-memory database.
There’s an embedded MongoDB plugin for that.
With this plugin you can run a server by just creating it at runtime, or run it as a goal in Maven / a task in Gradle.
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo
https://github.com/sourcemuse/GradleMongoPlugin

Embedded MongoDB on Gradle

I will elaborate more on Gradle later, but here’s what I needed to do in order to set up the embedded Mongo.

dependencies {
	// More dependencies here
	testCompile 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
}

Setup Properties

mongo {
	//	logFilePath: The desired log file path (defaults to 'embedded-mongo.log')
	logging 'console'
	mongoVersion 'PRODUCTION'
	port 12345
	//	storageLocation: The directory location from where embedded Mongo will run, such as /tmp/storage (defaults to a java temp directory)
}

Embedded MongoDB Gradle Tasks

startMongoDb will just start the server. It will run until you stop it.
stopMongoDb will stop it.
startManagedMongoDb test: two tasks, which will start the embedded server before the tests run. The server will shut down when the JVM finishes (when the tests finish).

Gradle

https://gradle.org/
Although I have only touched the tip of the iceberg, I have started seeing the strength of Gradle.
It wasn’t even that hard to set up the project.

Gradle Setup

First, I created a Gradle project in Eclipse (after installing the Gradle plugin).
I needed to set up the dependencies. Very simple. Just like Maven.

One Big JAR Output

When I want to create one big jar from all libraries in Maven, I use the shade plugin.
I was looking for something similar for Gradle, and found the gradle-one-jar plugin.
https://github.com/rholder/gradle-one-jar
I added the plugin:
apply plugin: 'gradle-one-jar'
Then I added gradle-one-jar to the buildscript classpath:

buildscript {
	repositories { mavenCentral() }
	dependencies {
		classpath 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
		classpath 'com.github.rholder:gradle-one-jar:1.0.4'
	}
}

And added a task:

mainClassName = 'org.eyalgo.server.dropwizard.CountersBufferApplication'
task oneJar(type: OneJar) {
	mainClass = mainClassName
	archiveName = 'counters.jar'
	mergeManifestFromJar = true
}

Those were the necessary actions I needed to do in order to make the application run.

Dropwizard

Dropwizard is a stack of libraries that makes it easy to create web servers quickly.
It uses Jetty for HTTP and Jersey for REST. It has other mature libraries to create complicated services.
It can be used to develop a microservice easily.

As I explained in the introduction, I will not cover all of Dropwizard’s features and setup.
There are plenty of sites for that.
I will briefly cover the actions I did in order to make the application run.
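
To give context for the next sections, here is a hedged sketch of what the application class might look like (the class name comes from the Gradle mainClassName above; the configuration class name is an assumption for illustration):

import io.dropwizard.Application;
import io.dropwizard.setup.Environment;

// A minimal sketch, not the exact class from the repository.
public class CountersBufferApplication extends Application<CountersBufferConfiguration> {

	public static void main(String[] args) throws Exception {
		new CountersBufferApplication().run(args);
	}

	@Override
	public void run(CountersBufferConfiguration configuration, Environment environment) {
		// Wiring happens here: build the buffer and the Mongo services from the configuration,
		// register the REST resources with environment.jersey().register(...),
		// and register the health check with environment.healthChecks().register("mongo", ...).
	}
}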

Gradle Run Task

run { args 'server', './src/main/resources/config/counters.yml' }
The first argument is server. The second argument is the location of the configuration file.
If you don’t give Dropwizard the first argument, you will get a nice error message listing the possible options.

positional arguments:
  {server,check}         available commands

I already showed how to create one jar in the Gradle section.

Configuration

In Dropwizard, you setup the application using a class that extends Configuration.
The fields in the class should align to the properties in the yml configuration file.

It is a good practice to put the properties in groups, based on their usage/responsibility.
For example, I created a group for mongo parameters.

In order for the configuration class to read the sub-groups correctly, you need to create a class that aligns with the properties in the group.
Then, in the main configuration, add this class as a member and mark it with annotation: @JsonProperty.
Example:

@JsonProperty("mongo")
private MongoServicesFactory servicesFactory = new MongoServicesFactory();
@JsonProperty("buffer")
private BufferConfiguration bufferConfiguration = new BufferConfiguration();
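
For example, a hedged sketch of what the buffer group class might look like; the property names match the getters used when building the Guava cache above, while the default values are made up for illustration:

import com.fasterxml.jackson.annotation.JsonProperty;

// A minimal sketch of a configuration sub-group class.
public class BufferConfiguration {
	@JsonProperty
	private long maximumSize = 10000;

	@JsonProperty
	private int expireAfterWriteInSec = 60;

	@JsonProperty
	private int expireAfterAccessInSec = 60;

	public long getMaximumSize() {
		return maximumSize;
	}

	public int getExpireAfterWriteInSec() {
		return expireAfterWriteInSec;
	}

	public int getExpireAfterAccessInSec() {
		return expireAfterAccessInSec;
	}
}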

Example: Changing the Ports

Here’s part of the configuration file that sets the ports for the application.

server:
  adminMinThreads: 1
  adminMaxThreads: 64
  applicationConnectors:
    - type: http
      port: 9090
  adminConnectors:
    - type: http
      port: 9091

Health Check

Dropwizard gives a basic admin API out of the box. I changed its port to 9091.
I created a health check for the MongoDB connection.
You need to extend HealthCheck and implement the check method.

private final MongoClient mongo;
...
protected Result check() throws Exception {
	try {
		mongo.getDatabaseNames();
		return Result.healthy();
	} catch (Exception e) {
		return Result.unhealthy("Cannot connect to " + mongo.getAllAddress());
	}
}

Other features are pretty much self-explanatory or as simple as in any getting-started tutorial.

Ideas for Enhancement

There are some things I may try to add.

  • Add tests to the Dropwizard section.
    This project started as a PoC, so, unlike my usual practice, I skipped the tests in the server part.
    Dropwizard has Testing Dropwizard, which I want to try.
  • A different persistence implementation (Couchbase? Hazelcast?).
  • Injection using Google Guice, and with its help, injecting a different persistence implementation.

That’s all.
Hope that helps.

Source code: https://github.com/eyalgo/CountersBuffering


Java 8 Stream and Lambda Expressions – Parsing File Example

Recently I wanted to extract certain data from an output log.
Here’s part of the log file:

2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: eVentToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557]
2015-01-06 11:33:03 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar
2015-01-06 11:33:04 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557]
2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo
2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz

I decided to do it using the Java 8 Stream and lambda expression features.

Read the file
First, I needed to read the log file and put the lines in a Stream:

Stream<String> lines = Files.lines(Paths.get(args[1]));

Filter relevant lines
I needed to get the package names and write them into another file.
Not all lines contain the data I need, hence I filter only the relevant ones.

lines.filter(line -> line.contains("===---> Loaded package"))

Parsing the relevant lines
Then, I needed to parse the relevant lines.
I did it by first splitting each line into an array of Strings and then taking the last element of that array.
In other words, I did a double mapping: first a line to an array, and then an array to a String.

.map(line -> line.split(" "))
.map(arr -> arr[arr.length - 1])

Writing to output file
The last part was taking each String and writing it to a file. That was the terminal operation.

.forEach(packageName -> writeToFile(fw, packageName));

writeToFile is a method I created.
The reason is that writing to a file throws IOException, and you can’t throw checked exceptions from a lambda expression.

Here’s a full example (note: I don’t validate the input).

import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class App {
	public static void main(String[] args) throws IOException {
		Stream<String> lines = null;
		if (args.length == 2) {
			lines = Files.lines(Paths.get(args[1]));
		} else {
			String s1 = "2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: adEventToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557]";
			String s2 = "2015-01-06 11:33:03 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557]";
			String s3 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar";
			String s4 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo";
			String s5 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz";
			List<String> rows = Arrays.asList(s1, s2, s3, s4, s5);
			lines = rows.stream();
		}
		
		new App().parse(lines, args[0]);

	}
	
	private void parse(Stream<String> lines, String output) throws IOException {
		final FileWriter fw = new FileWriter(output);
		
		//@formatter:off
		lines.filter(line -> line.contains("===---> Loaded package"))
		.map(line -> line.split(" "))
		.map(arr -> arr[arr.length - 1])
		.forEach(packageName-> writeToFile(fw, packageName));
		//@formatter:on
		fw.close();
		lines.close();
	}

	private void writeToFile(FileWriter fw, String packageName) {
		try {
			fw.write(String.format("%s%n", packageName));
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
	}

}

(You can find more Java 8 features tutorials at: Java Code Geeks – Java 8 Features Tutorial)

Playing With Java Concurrency

Recently I needed to transform files, each containing a list (array) of objects in JSON format, into files with a separate line per object (same data).

It was a simple, one-time task.
I did the reading and writing using some features of Java NIO.
I used GSON in the simplest way.
One thread runs over the files, converts and writes.

The whole operation finished in a few seconds.

However, I wanted to play a little bit with concurrency.
So I enhanced the tool to work concurrently:

Threads
A Runnable for reading a file.
The reader runnables are submitted to an ExecutorService.
The output, which is a list of objects (User in the example), is put into a BlockingQueue.

A Runnable for writing a file.
Each writer runnable polls from the blocking queue.
It writes lines of data to a file.
I don’t add the writer Runnable to the ExecutorService, but instead just start a thread with it.
The runnable has a while(some boolean is true) {...} pattern.
More about that below…

Synchronizing Everything
The BlockingQueue is the interface between both types of threads.

As the writer runnable runs in a while loop (consumer), I wanted to be able to make it stop so the tool can terminate.
So I used two objects for that:

Semaphore
The loop that reads the input files counts them (numberOfFiles).
Once the main thread finishes traversing the input files and submitting the reader runnables, it acquires that many permits from a semaphore that starts with zero permits:
semaphore.acquire(numberOfFiles);

Each time a writer runnable finishes writing a file, it releases one permit:
semaphore.release();

AtomicBoolean
The writers’ while loop uses an AtomicBoolean.
As long as the AtomicBoolean is true, the writer will continue.

In the main thread, just after the acquire of the semaphore, I set the AtomicBoolean to false.
This enables the writer threads to terminate.

Using Java NIO
In order to scan, read and write the file system, I used some features of Java NIO.

Scanning: Files.newDirectoryStream(inputFilesDirectory, "*.json");
Deleting output directory before starting: Files.walkFileTree...
BufferedReader and BufferedWriter: Files.newBufferedReader(filePath); Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());

One note: in order to generate random files for this example, I used Apache commons-lang: RandomStringUtils.randomAlphabetic.
All code in GitHub.

public class JsonArrayToJsonLines {
	private final static Path inputFilesDirectory = Paths.get("src\\main\\resources\\files");
	private final static Path outputDirectory = Paths
			.get("src\\main\\resources\\files\\output");
	private final static Gson gson = new Gson();
	
	private final BlockingQueue<EntitiesData> entitiesQueue = new LinkedBlockingQueue<>();
	
	private AtomicBoolean stillWorking = new AtomicBoolean(true);
	private Semaphore semaphore = new Semaphore(0);
	int numberOfFiles = 0;

	private JsonArrayToJsonLines() {
	}

	public static void main(String[] args) throws IOException, InterruptedException {
		new JsonArrayToJsonLines().process();
	}

	private void process() throws IOException, InterruptedException {
		deleteFilesInOutputDir();
		final ExecutorService executorService = createExecutorService();
		DirectoryStream<Path> directoryStream = Files.newDirectoryStream(inputFilesDirectory, "*.json");
		
		for (int i = 0; i < 2; i++) {
			new Thread(new JsonElementsFileWriter(stillWorking, semaphore, entitiesQueue)).start();
		}

		directoryStream.forEach(new Consumer<Path>() {
			@Override
			public void accept(Path filePath) {
				numberOfFiles++;
				executorService.submit(new OriginalFileReader(filePath, entitiesQueue));
			}
		});
		
		semaphore.acquire(numberOfFiles);
		stillWorking.set(false);
		shutDownExecutor(executorService);
	}

	private void deleteFilesInOutputDir() throws IOException {
		Files.walkFileTree(outputDirectory, new SimpleFileVisitor<Path>() {
			@Override
			public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
				Files.delete(file);
				return FileVisitResult.CONTINUE;
			}
		});
	}

	private ExecutorService createExecutorService() {
		int numberOfCpus = Runtime.getRuntime().availableProcessors();
		return Executors.newFixedThreadPool(numberOfCpus);
	}

	private void shutDownExecutor(final ExecutorService executorService) {
		executorService.shutdown();
		try {
			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
				executorService.shutdownNow();
			}

			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
			}
		} catch (InterruptedException ex) {
			executorService.shutdownNow();
			Thread.currentThread().interrupt();
		}
	}


	private static final class OriginalFileReader implements Runnable {
		private final Path filePath;
		private final BlockingQueue<EntitiesData> entitiesQueue;

		private OriginalFileReader(Path filePath, BlockingQueue<EntitiesData> entitiesQueue) {
			this.filePath = filePath;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			Path fileName = filePath.getFileName();
			try {
				BufferedReader br = Files.newBufferedReader(filePath);
				User[] entities = gson.fromJson(br, User[].class);
				System.out.println("---> " + fileName);
				entitiesQueue.put(new EntitiesData(fileName.toString(), entities));
			} catch (IOException | InterruptedException e) {
				throw new RuntimeException(filePath.toString(), e);
			}
		}
	}

	private static final class JsonElementsFileWriter implements Runnable {
		private final BlockingQueue<EntitiesData> entitiesQueue;
		private final AtomicBoolean stillWorking;
		private final Semaphore semaphore;

		private JsonElementsFileWriter(AtomicBoolean stillWorking, Semaphore semaphore,
				BlockingQueue<EntitiesData> entitiesQueue) {
			this.stillWorking = stillWorking;
			this.semaphore = semaphore;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			while (stillWorking.get()) {
				try {
					EntitiesData data = entitiesQueue.poll(100, TimeUnit.MILLISECONDS);
					if (data != null) {
						try {
							String fileOutput = outputDirectory.toString() + File.separator + data.fileName;
							Path fileOutputPath = Paths.get(fileOutput);
							BufferedWriter writer = Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());
							for (User user : data.entities) {
								writer.append(gson.toJson(user));
								writer.newLine();
							}
							writer.flush();
							System.out.println("=======================================>>>>> " + data.fileName);
						} catch (IOException e) {
							throw new RuntimeException(data.fileName, e);
						} finally {
							semaphore.release();
						}
					}
				} catch (InterruptedException e1) {
				}
			}
		}
	}

	private static final class EntitiesData {
		private final String fileName;
		private final User[] entities;

		private EntitiesData(String fileName, User[] entities) {
			this.fileName = fileName;
			this.entities = entities;
		}
	}
}


JUnit Rules

Introduction
In this post I would like to show an example of how to use JUnit Rule to make testing easier.

Recently I inherited a rather complex system, in which not everything is tested, and even the tested code is complex.
Mostly I see a lack of test isolation.
(I will write a different blog post about working with legacy code.)

One of the tests (and the code) I am fixing actually tests several components together.
It also connects to the DB. It tests some logic and the interaction between components.
When the code did not compile in a totally different location, the test could not run, because it loaded the whole Spring context.
The structure was that before testing (any class), the whole Spring context was initiated.
The tests extend BaseTest, which loads the whole Spring context.

BaseTest also cleans the DB in the @After method.

Important note: This article is about changing tests which are not structured entirely correctly.
When creating new code and tests, they should be isolated, test one thing, etc.
Better tests should use a mock DB / mock dependencies, etc.
After I fix the tests and refactor, I’ll have the confidence to make more changes.

Back to our topic…
So, what I got was a slow test suite, no isolation, and even problems running tests due to unrelated issues.

So I decided to separate the context loading and the DB connection from each other, and both of them from the database cleanup.

Approach
In order to achieve that I did three things:
The first was to change the inheritance of the test class.
It stopped inheriting BaseTest.
Instead it inherits AbstractJUnit4SpringContextTests.
Now I can create my own context per test and not load everything.

Then I needed two rules, a @ClassRule and a @Rule:
@ClassRule will be responsible for the DB connection.
@Rule will clean up the DB before / after each test.

But first, what are JUnit Rules?
A short explanation would be that they provide a possibility to intercept test methods, similar to the AOP concept.
@Rule lets us intercept a method before and after its actual run.
@ClassRule intercepts the test class run.
A well-known @Rule is JUnit’s TemporaryFolder.

(Similar to @Before, @After and @BeforeClass.)

Creating @Rule
The easy part was creating a Rule that cleans up the DB before and after a test method.
You need to implement TestRule, which has one method: Statement apply(Statement base, Description description);
You can do a lot with it.
I found out that usually I will have an inner class that extends Statement.
The rule I created does not create the DB connection, but gets it in the constructor.

Here’s the full code:

public class DbCleanupRule implements TestRule {
	private final DbConnectionManager connection;

	public DbCleanupRule(DbConnectionManager connection) {
		this.connection = connection;
	}

	@Override
	public Statement apply(Statement base, Description description) {
		return new DbCleanupStatement(base, connection);
	}

	private static final class DbCleanupStatement extends Statement {
		private final Statement base;
		private final DbConnectionManager connection;

		private DbCleanupStatement(Statement base, DbConnectionManager connection) {
			this.base = base;
			this.connection = connection;
		}

		@Override
		public void evaluate() throws Throwable {
			try {
				cleanDb();
				base.evaluate();
			} finally {
				cleanDb();
			}
		}

		private void cleanDb() {
			connection.doTheCleanup();
		}
	}
}

Creating @ClassRule
ClassRule is actually also TestRule.
The only difference from Rule is how we use it in our test code.
I’ll show it below.

The challenge in creating this rule was that I wanted to use the Spring context to get the correct connection.
Here’s the code:
(ExternalResource is a TestRule)

public class DbConnectionRule extends ExternalResource {
	private DbConnectionManager connection;

	public DbConnectionRule() {
	}

	@Override
	protected void before() throws Throwable {
		ClassPathXmlApplicationContext ctx = null;
		try {
			ctx = new ClassPathXmlApplicationContext("/META-INF/my-db-connection-TEST-ctx.xml");
			connection = (DbConnectionManager) ctx.getBean("myDbConnection");
		} finally {
			if (ctx != null) {
				ctx.close();
			}
		}
	}

	@Override
	protected void after() {
	}

	public DbConnectionManager getDbConnecttion() {
		return connection;
	}
}

(Did you see that I could make DbCleanupRule inherit ExternalResource?)

Using it
The last part is how we use the rules.
A @Rule must be a public field.
A @ClassRule must be a public static field.

And there it is:

@ContextConfiguration(locations = { "/META-INF/one-dao-TEST-ctx.xml", "/META-INF/two-TEST-ctx.xml" })
public class ExampleDaoTest extends AbstractJUnit4SpringContextTests {

	@ClassRule
	public static DbConnectionRule connectionRule = new DbConnectionRule();

	@Rule
	public DbCleanupRule dbCleanupRule = new DbCleanupRule(connectionRule.getDbConnecttion());

	@Autowired
	private ExampleDao classToTest;

	@Test
	public void foo() {
	}
}

That’s all.
Hope it helps.

Eyal

[Edit]
I got some good remarks from Logan Mzz at DZone: http://java.dzone.com/articles/junit-rules#comment-125673

  1. Link to Junit Rules: https://github.com/junit-team/junit/wiki/Rules
  2. There’s ErrorCollector rule, which avoids annoying test-fail-fix cycles for a single test.
  3. And RuleChain, which is described in the comment


RSS Reader Using: ROME, Spring MVC, Embedded Jetty

In this post I will show some guidelines for creating a Spring web application, running it using Jetty, and using an external library called ROME for RSS reading.

General

I have recently created a sample web application that acts as an RSS reader.
I wanted to examine ROME for RSS reading.
I also wanted to create the application using Spring container and MVC for the simplest view.
For rapid development, I used Jetty as the server, using a simple java class for it.
All the code can be found at GitHub, eyalgo/rss-reader.

Content

  1. Maven Dependencies
  2. Jetty Server
  3. Spring Dependency
  4. Spring MVC
  5. ROME

Maven Dependencies

At first, I could not figure out the correct Jetty version to use.
There is one with the mortbay group-id, and another by eclipse.
After some careful examination and trial and error, I took the eclipse library.
Spring is just standard.
I found the newest ROME version on GitHub. It’s still a SNAPSHOT.

Here’s the list of the dependencies:

  • Spring
  • jetty
  • rome and rome-fetcher
  • logback and slf4j
  • For Testing
    • Junit
    • mockito
    • hamcrest
    • spring-test

The project’s pom file can be found at: https://github.com/eyalgo/rss-reader/blob/master/pom.xml

Jetty Server

A few years ago I worked with the Wicket framework and got to know Jetty and how easy it is to create a server with it.
I decided to go in that direction and to skip the standard web server with WAR deployment.

There are several ways to create the Jetty server.
I decided to create the server, using a web application context.

First, create the context:

private WebAppContext createContext() {
  WebAppContext webAppContext = new WebAppContext();
  webAppContext.setContextPath("/");
  webAppContext.setWar(WEB_APP_ROOT);
  return webAppContext;
}

Then, create the server and add the context as handler:

  Server server = new Server(port);
  server.setHandler(webAppContext);

Finally, start the server:

  try {
    server.start();
  } catch (Exception e) {
    LOGGER.error("Failed to start server", e);
    throw new RuntimeException();
  }

Everything is under https://github.com/eyalgo/rss-reader/tree/master/src/test/java/com/eyalgo/rssreader/server

Spring Project Structure

RSS Reader Project Structure


Spring Dependency

In web.xml I am declaring application-context.xml and web-context.xml.
In web-context.xml, I am telling Spring where to scan for components:
<context:component-scan base-package="com.eyalgo.rssreader"/>
In application-context.xml I am adding a bean, which is an external class and therefore I can’t annotate it for scanning:
<bean id="fetcher" class="org.rometools.fetcher.impl.HttpURLFeedFetcher"/>

Besides scanning, I am adding the correct annotations to the correct classes:
@Repository
@Service
@Controller

@Autowired

Spring MVC

In order to have some basic view of the RSS feeds (and Atom feeds), I used simple MVC with JSP pages.
To create a controller, I needed to add @Controller to the class.
I added @RequestMapping("/rss") so all requests are prefixed with rss.

Each method has a @RequestMapping declaration. I decided that everything is GET.

Adding a Parameter to the Request

Just add @RequestParam("feedUrl") before the parameter of the method.

Redirecting a Request

After adding an RSS location, I wanted to redirect the response to show all current RSS items.
So the method for adding an RSS feed needed to return a String.
The returned value is: “redirect:all”.

  @RequestMapping(value = "feed", method = RequestMethod.GET)
  public String addFeed(@RequestParam("feedUrl") String feedUrl) {
    feedReciever.addFeed(feedUrl);
    return "redirect:all";
  }

Return a ModelAndView Class

In Spring MVC, when a method returns a String, the framework looks for a JSP page with that name.
If there is none, we’ll get an error.
(If you want to return just the String as the response body, you can add @ResponseBody to the method.)

In order to use ModelAndView, you need to create one with a name:
ModelAndView modelAndView = new ModelAndView("rssItems");
The name will tell Spring MVC which JSP to refer to.
In this example, it will look for rssItems.jsp.

Then you can add to the ModelAndView “objects”:

  List<FeedItem> items = itemsRetriever.get();
  ModelAndView modelAndView = new ModelAndView("rssItems");
  modelAndView.addObject("items", items);

In the JSP page, you need to refer to the names of the objects you added.
And then, you can access their properties.
So in this example, we’ll have the following in rssItems.jsp:

  <c:forEach items="${items}" var="item">
    <div>
      <a href="${item.link}" target="_blank">${item.title}</a><br>
        ${item.publishedDate}
    </div>
  </c:forEach>

Note
Spring “knows” to add jsp as a suffix to the ModelAndView name because I declared it in web-context.xml,
in the bean of class org.springframework.web.servlet.view.InternalResourceViewResolver.
By setting the prefix, this bean also tells Spring where to look for the JSP pages.
Please look:
https://github.com/eyalgo/rss-reader/blob/master/src/main/java/com/eyalgo/rssreader/web/RssController.java
https://github.com/eyalgo/rss-reader/blob/master/src/main/webapp/WEB-INF/views/rssItems.jsp

Error Handling

There are several ways to handle errors in Spring MVC.
I chose a generic way, in which for any error, a general error page will be shown.

First, add @ControllerAdvice to the class you want to handle errors.

Second, create a method per type of exception you want to catch.
You need to annotate the method with @ExceptionHandler. The parameter tells which exception this method will handle.

You can have a method for IllegalArgumentException, another for a different exception, and so on.

The return value can be anything and it acts like a normal controller method. That means having a JSP (for example) with the name of the object the method returns.

In this example, the method catches all exceptions and activates error.jsp, adding the message to the page.

  @ExceptionHandler(Exception.class)
  public ModelAndView handleAllException(Exception e) {
    ModelAndView model = new ModelAndView("error");
    model.addObject("message", e.getMessage());
    return model;
  }

ROME

ROME is an easy-to-use library for handling RSS feeds.
https://github.com/rometools/rome
rome-fetcher is an additional library that helps getting (fetching) RSS feeds from external sources, such as HTTP, or URL.
https://github.com/rometools/rome-fetcher

As of now, the latest build is 2.0.0-SNAPSHOT

An example on how to read an input RSS XML file can be found at:
https://github.com/eyalgo/rss-reader/blob/master/src/test/java/com/eyalgo/rssreader/runners/MetadataFeedRunner.java

To make life easier, I used rome-fetcher.
It gives you the ability to give a URL (RSS feed) and have all the SyndFeed out of it.

If you want, you can add caching, so it won’t download cached items (items that were already downloaded).
All you need is to create the fetcher with FeedFetcherCache parameter in the constructor.
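
For example, a hedged sketch of creating such a fetcher, using the class from the Spring bean above and its in-memory cache companion (both from the org.rometools.fetcher.impl package):

// Create the fetcher with an in-memory cache, so already-downloaded items are not fetched again.
FeedFetcherCache feedInfoCache = HashMapFeedInfoCache.getInstance();
FeedFetcher fetcher = new HttpURLFeedFetcher(feedInfoCache);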

Usage:

  @Override
  public List<FeedItem> extractItems(String feedUrl) {
    try {
      List<FeedItem> result = Lists.newLinkedList();
      URL url = new URL(feedUrl);
      SyndFeed feed = fetcher.retrieveFeed(url);
      List<SyndEntry> entries = feed.getEntries();
      for (SyndEntry entry : entries) {
        result.add(new FeedItem(entry.getTitle(), entry.getLink(), entry.getPublishedDate()));
      }
      return result;
    } catch (IllegalArgumentException | IOException | FeedException | FetcherException e) {
      throw new RuntimeException("Error getting feed from " + feedUrl, e);
    }
}

https://github.com/eyalgo/rss-reader/blob/master/src/main/java/com/eyalgo/rssreader/service/rome/RomeItemsExtractor.java

Note
If you get a warning message (it looks like System.out output) saying that fetcher.properties is missing, just add an empty file under resources (or in the root of the classpath).

Summary

This post covered several topics.
You can also have a look at the way a lot of the code is tested.
Check Matchers and mocks.

If you have any remarks, please drop a note.

Eyal


Using Reflection for Testing

I am working on a presentation about the ‘Single Responsibility Principle’, based on my previous post.
It takes most of my time.

In the meantime, I want to share some sample code showing how I test inner fields in my classes.
I do it for a special kind of testing, which is more of an integration test.
In the standard unit tests of the dependent class, I use mocks of the dependencies.

The Facts

  1. All of the fields (and dependencies) in our classes are private
  2. The classes do not have getters for their dependencies
  3. We wire things up using Spring (XML context)
  4. I want to verify that dependency interface A is wired correctly into dependent class B

One approach would be to wire everything and then run some kind of integration test of the logic.
I don’t want to do this; it would make the test hard to maintain.

The other approach is to check wiring directly.
And for that I am using reflection.

Below is a sample of the testing method, and its usage.
Notice how I catch the exceptions and throw a RuntimeException in case there is a problem.
This way, the tested code stays cleaner.


// Somewhere in a different utility class for testing
@SuppressWarnings("unchecked")
public static <T> T realObjectFromField(Class<?> clazz, String fieldName, Object object) {
	Field declaredField = accessibleField(clazz, fieldName);
	try {
		return (T) declaredField.get(object);
	} catch (IllegalArgumentException | IllegalAccessException e) {
		throw new RuntimeException(e);
	}
}

private static Field accessibleField(Class<?> clazz, String fieldName) {
	try {
		Field declaredField = clazz.getDeclaredField(fieldName);
		declaredField.setAccessible(true);
		return declaredField;
	} catch (NoSuchFieldException | SecurityException e) {
		throw new RuntimeException(e);
	}
}

// This is how we use it in a test method
import static mypackage.ReflectionUtils.realObjectFromField;

ItemFiltersMapperByFlag mapper = realObjectFromField(ItemsFilterExecutor.class, "filtersMapper", filterExecutor);
assertNotNull("mapper is null. Check wiring", mapper);

Spring Context with Properties, Collections and Maps

In this post I want to show how I added the XML context files to the Spring application.
The second aspect I will show is the usage of a properties file for external constant values.

All of the code is located at: https://github.com/eyalgo/request-validation (as previous posts).

I decided to do all the wiring using XML files and not annotations, for several reasons:

  1. I am simulating a situation where the framework is not part of the codebase (it’s an external library) and it is not annotated with anything
  2. I want to emphasize the modularity of the system using several XML files (yes, I know it can be done using @Configuration)
  3. Although I know Spring, I still feel more comfortable having more control using the XML files
  4. For Spring newbies, I think they should start with XML configuration files, and only once they grasp the idea and the technology, start using annotations

About the modularization and how the sample app is constructed, I will expand in a later post.

Let’s start with the properties file.
Here’s part of it:

flag.external = EXTERNAL
flag.internal = INTERNAL
flag.even = EVEN
flag.odd = ODD

validation.acceptedIds=flow1,flow2,flow3,flow4,flow5

filter.external.name.max = 10
filter.external.name.min = 4

filter.internal.name.max = 6
filter.internal.name.min = 2

Properties File Location
We also need to tell Spring the location of our properties file.
You can use PropertyPlaceholderConfigurer, or you can use the context element as shown here:

<context:property-placeholder location="classpath:spring/flow.properties" />

Simple Bean Example
This is the very basic example of how to create a bean

<bean id="evenIdFilter"
  class="org.eyal.requestvalidation.flow.example.flow.itemsfilter.filters.EvenIdFilter">
</bean>

Using Simple Property
Suppose you want to add a property attribute to your bean.
I always use constructor injection, so I will use constructor-arg in the bean declaration.

<bean id="longNameExternalFilter"
    class="org.eyal.requestvalidation.flow.example.flow.itemsfilter.filters.NameTooLongFilter">
    <constructor-arg value="${filter.external.name.max}" />
</bean>

List Example
Suppose you have a class that gets a list (or set) of objects (either another bean class, or just Strings).
You can add it as a parameter in the constructor-arg, but I prefer to create the list outside the bean declaration and refer to it in the bean.
Here’s how:

<util:list id="defaultFilters">
  <ref bean="emptyNameFilter" />
  <ref bean="someOtherBean" />
</util:list>

And

<bean id="itemFiltersMapperByFlag"
  class="org.eyal.requestvalidation.flow.itemsfilter.ItemFiltersMapperByFlag">
   <constructor-arg ref="defaultFilters" />
   <constructor-arg ref="filtersByFlag" />
</bean>

Collection of Values in the Properties File
What if I want to pass a list (or set) of values to a bean,
not a list of beans as described above?
Then in the properties file I will put:
validation.acceptedIds=flow1,flow2,flow3,flow4,flow5

And in bean:

<bean id="acceptedIdsValidation"
  class="org.eyal.requestvalidation.flow.example.flow.requestvalidation.validations.AcceptedIdsValidation">
  <constructor-arg value="#{'${validation.acceptedIds}'.split(',')}" />
</bean>

See how I used the Spring Expression Language (SpEL).

Map Injection Example
Here’s a sample of an empty map creation:

<util:map id="validationsByFlag">
</util:map>

Here’s a map with some entries.
See how the keys are also set from the properties file.

<util:map id="filtersByFlag">
  <entry key="${flag.external}" value-ref="filtersForExternal" />
  <entry key="${flag.internal}" value-ref="filtersForInternal" />
  <entry key="${flag.even}" value-ref="filtersForEven" />
  <entry key="${flag.odd}" value-ref="filtersForOdd" />
</util:map>


In the map example above, the keys are Strings from the properties file.
The values are references to other beans, as described above.

The usage would be the same as for list:

<bean id="itemFiltersMapperByFlag"
  class="org.eyal.requestvalidation.flow.itemsfilter.ItemFiltersMapperByFlag">
   <constructor-arg ref="defaultFilters" />
   <constructor-arg ref="filtersByFlag" />
</bean>

Conclusion
In this post I showed some basic examples of Spring configuration using XML and a properties file.
I strongly believe that until the team fully understands the way Spring works, everyone should stick with this kind of configuration.
If you find that your configuration files are getting too big, then you may want to check your design. Annotations will just hide your poorly designed system.

Spring and Maven Configuration

This is the first post in a series demonstrating how to use Spring in an application.
In the series I will show some how-tos for technical aspects (context files, properties, etc.).
I will also show some design aspects and the test approach.

In this post I will simply show how to integrate Spring using Maven.

The basic dependency would be spring-context. Through Maven’s transitive dependencies, spring-core will be in the project as well.

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>${spring.version}</version>
</dependency>

If we want to use annotations such as @Inject, which comes from JSR-330, we’ll add the following dependency:

<dependency>
  <groupId>javax.inject</groupId>
  <artifactId>javax.inject</artifactId>
  <version>1</version>
</dependency>

And in order to be able to test using Spring, here’s what we’ll need (in here, the scope is test):

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-test</artifactId>
  <version>${spring.version}</version>
  <scope>test</scope>
</dependency>

You can see that I didn’t add spring-core as it comes with the context / test dependencies.

You can find the code at: https://github.com/eyalgo/request-validation

Some notes about the code.

I added the Spring code, context and Spring’s Maven dependencies to the test environment.
This is on purpose.
I want to emphasize the separation of the validation-filter framework from the usage and wiring of an application.

In real life, you might have an external library that you want to use in a Spring-injected application.
So the test environment in the code simulates the application, and the src is the “external library”.

Request Validation and Filtering by Flags – Redesign and Refactoring

General
In previous posts I started describing a validation / filtering framework we’re building.
While showing the code, I am trying to show clean code, test orientation and code evolution.
The process has some agility in it; we know the end requirements, but the exact details are evolving over time.

During the development we have changed the code to be more general as we saw some patterns in it.
The code evolved as the flow evolved as well.

The flow as we now understand it
Here’s a diagram of the flow we’ll implement

Request Sequence


The Pattern
At each step of the sequence (validation, filtering, action), we recognized the same pattern:

  1. We have specific implementations (filters, validations)
  2. We have an engine that wraps up the specific implementations
  3. We need to map the implementations by flag and, based on the request’s flags, select the appropriate implementations
  4. We need a class that calls the mapper and then the engine (see the sketch below)
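
A hedged sketch of that fourth piece for the filtering flow (ItemsFilterExecutor is the class name used in the reflection example earlier; its actual structure in the repository may differ):

// A minimal sketch of the class that calls the mapper and then the engine.
public class ItemsFilterExecutor {
	private final MapperByFlag<Filter> filtersMapper;
	private final FiltersEngine filtersEngine;

	public ItemsFilterExecutor(MapperByFlag<Filter> filtersMapper, FiltersEngine filtersEngine) {
		this.filtersMapper = filtersMapper;
		this.filtersEngine = filtersEngine;
	}

	public ItemsFilterResponse filter(Request request) {
		// select filters by the request's flags, then apply them to the request's items
		List<Filter> filters = filtersMapper.getOperations(request);
		return filtersEngine.applyFilters(filters, request.getItems());
	}
}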

A diagram showing the pattern

The Pattern


Source Code
In order to show some of the evolution of the code, and how refactoring changed it, I added tags in GitHub after major changes.

Code Examples
Let’s see what came out of the mapper pattern.

public interface MapperByFlag<T> {
  List<T> getOperations(Request request);
}
public abstract class AbstractMapperByFlag<T> implements MapperByFlag<T> {
  private List<T> defaultOperations;
  private Map<String, List<T>> mapOfOperations;

  public AbstractMapperByFlag(List<T> defaultOperations, Map<String, List<T>> mapOfOperations) {
    this.defaultOperations = defaultOperations;
    this.mapOfOperations = mapOfOperations;
  }

  @Override
  public final List<T> getOperations(Request request) {
    Set<T> selectedFilters = Sets.newHashSet(defaultOperations);
    Set<String> flags = request.getFlags();
    for (String flag : flags) {
      if (mapOfOperations.containsKey(flag)) {
        selectedFilters.addAll(mapOfOperations.get(flag));
      }
    }
    return Lists.newArrayList(selectedFilters);
  }
}
public class RequestValidationByFlagMapper extends AbstractMapperByFlag<RequestValidation> {
  public RequestValidationByFlagMapper(List<RequestValidation> defaultValidations,
      Map<String, List<RequestValidation>> mapOfValidations) {
    super(defaultValidations, mapOfValidations);
  }
}

public class ItemFiltersByFlagMapper extends AbstractMapperByFlag<Filter> {
  public ItemFiltersByFlagMapper(List<Filter> defaultFilters, Map<String, List<Filter>> mapOfFilters) {
    super(defaultFilters, mapOfFilters);
  }
}

I created a test for the abstract class, to show the flow itself.
The tests of the implementations use Java reflection to verify that the correct injected parameters are passed to the super constructor.
I am showing the imports here as well, to give some reference for the static imports and for the Mockito and Hamcrest packages and classes.

import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.when;

import java.util.List;
import java.util.Map;

import org.eyal.requestvalidation.model.Request;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;

@RunWith(MockitoJUnitRunner.class)
public class AbstractMapperByFlagTest {
	private final static String FLAG_1 = "flag 1";
	private final static String FLAG_2 = "flag 2";

	@Mock
	private Request request;

	private String defaultOperation1 = "defaultOperation1";
	private String defaultOperation2 = "defaultOperation2";
	private String mapOperation11 = "mapOperation11";
	private String mapOperation12 = "mapOperation12";
	private String mapOperation23 = "mapOperation23";

	private MapperByFlag<String> mapper;

	@Before
	public void setup() {
		List<String> defaults = Lists.newArrayList(defaultOperation1, defaultOperation2);
		Map<String, List<String>> mapped = ImmutableMap.<String, List<String>> builder()
		        .put(FLAG_1, Lists.newArrayList(mapOperation11, mapOperation12))
		        .put(FLAG_2, Lists.newArrayList(mapOperation23, mapOperation11)).build();
		mapper = new AbstractMapperByFlag<String>(defaults, mapped) {
		};
	}

	@Test
	public void whenRequestDoesNotHaveFlagsShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet());

		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}

	@Test
	public void whenRequestHasFlagsNotInMappingShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet("un-mapped-flag"));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasOneFlagShouldReturnWithDefaultAndMappedFilters() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasTwoFlagsShouldReturnWithDefaultAndMappedFiltersWithoutDuplications() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1, FLAG_2));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2, mapOperation23));
	}
}
@RunWith(MockitoJUnitRunner.class)
public class RequestValidationByFlagMapperTest {

	@Mock
	private List<RequestValidation> defaultValidations;
    
	@Mock
	private Map<String, List<RequestValidation>> mapOfValidations;

	@InjectMocks
	private RequestValidationByFlagMapper mapper;

	@SuppressWarnings("unchecked")
    @Test
	public void verifyParameters() throws NoSuchFieldException, SecurityException, IllegalArgumentException,
	        IllegalAccessException {
		Field defaultOperationsField = AbstractMapperByFlag.class.getDeclaredField("defaultOperations");
		defaultOperationsField.setAccessible(true);
        List<RequestValidation> actualFilters = (List<RequestValidation>) defaultOperationsField.get(mapper);
		assertThat(actualFilters, sameInstance(defaultValidations));

		Field mapOfFiltersField = AbstractMapperByFlag.class.getDeclaredField("mapOfOperations");
		mapOfFiltersField.setAccessible(true);
		Map<String, List<RequestValidation>> actualMapOfFilters = (Map<String, List<RequestValidation>>) mapOfFiltersField.get(mapper);
		assertThat(actualMapOfFilters, sameInstance(mapOfValidations));
	}
}

To Do
There are other classes that might be candidates for some sort of refactoring.
RequestFlowValidation and RequestFilter are similar,
and so are RequestValidationsEngineImpl and FiltersEngine.

To Do 2
Create a Matcher for the reflection part.

Code
As always, all the code can be found at: https://github.com/eyalgo/request-validation

A tag for this post: all-components-in

Conclusion
The infrastructure is almost done.
During this time we are also implementing actual classes for the flow (validations, filters, actions).
These are not covered in the posts, nor in GitHub.
The infrastructure will be wired into an existing service of ours using Spring.
This will be explained in future posts.

Request Validation and Filtering by Flags – Filtering an Item

In a previous post, I introduced a system requirement of validating and filtering a request by setting flags on it.

Reference: Introduction

In this post I want to show the filtering system.

Here are general UML diagrams of the filtering components and sequence.

Filtering UML Diagram

General Components

public interface Item {
	String getName();
}

public interface Request {
	Set<String> getFlags();
	List<Item> getItems();
}

Filter Mechanism (as described in the UML above)

public interface Filter extends Predicate<Item> {
	String errorMessage();
}

FiltersEngine is a cool part: it takes several Filters and applies each of them to the items. Below you can see its code. Above, the sequence diagram shows how it’s done.

public class FiltersEngine {

	public FiltersEngine() {
	}

	public ItemsFilterResponse applyFilters(List<Filter> filters, List<Item> items) {
		List<Item> validItems = Lists.newLinkedList(items);
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Filter validator : filters) {
			ItemsFilterResponse responseFromFilter = responseFromFilter(validItems, validator);
			validItems = responseFromFilter.getValidItems();
			invalidItemInformations.addAll(responseFromFilter.getInvalidItemsInformations());
		}

		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}

	private ItemsFilterResponse responseFromFilter(List<Item> items, Filter filter) {
		List<Item> validItems = Lists.newLinkedList();
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Item item : items) {
			if (filter.apply(item)) {
				validItems.add(item);
			} else {
				invalidItemInformations.add(new InvalidItemInformation(item, filter.errorMessage()));
			}
		}
		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}
}

And of course, we need to test it:

@RunWith(MockitoJUnitRunner.class)
public class FiltersEngineTest {
	private final static String MESSAGE_FOR_FILTER_1 = "FILTER - 1 - ERROR";
	private final static String MESSAGE_FOR_Filter_2 = "FILTER - 2 - ERROR";
	@Mock(name = "filter 1")
	private Filter singleFilter1;
	@Mock(name = "filter 2")
	private Filter singleFilter2;
	@Mock(name = "item 1")
	private Item item1;
	@Mock(name = "item 2")
	private Item item2;

	@InjectMocks
	private FiltersEngine filtersEngine;

	@Before
	public void setup() {
		when(singleFilter1.errorMessage()).thenReturn(MESSAGE_FOR_FILTER_1);
		when(singleFilter2.errorMessage()).thenReturn(MESSAGE_FOR_Filter_2);

		when(item1.getName()).thenReturn("name1");

		when(item2.getName()).thenReturn("name2");
	}

	@Test
	public void verifyThatAllSingleFiltersAreCalledForValidItems() {
		when(singleFilter1.apply(item1)).thenReturn(true);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item1)).thenReturn(true);
		when(singleFilter2.apply(item2)).thenReturn(true);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat("expected no invalid", response.getInvalidItemsInformations(),
				emptyCollectionOf(InvalidItemInformation.class));
		assertThat(response.getValidItems(), containsInAnyOrder(item1, item2));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter2).apply(item1);
		verify(singleFilter2).apply(item2);
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	@SuppressWarnings("unchecked")
	@Test
	public void itemsFailInDifferentFiltersShouldGetOnlyFailures() {
		when(singleFilter1.apply(item1)).thenReturn(false);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item2)).thenReturn(false);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat(
				response.getInvalidItemsInformations(),
				containsInAnyOrder(matchInvalidInformation(new InvalidItemInformation(item1, MESSAGE_FOR_FILTER_1)),
						matchInvalidInformation(new InvalidItemInformation(item2, MESSAGE_FOR_FILTER_2))));
		assertThat(response.getValidItems(), emptyCollectionOf(Item.class));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter1).errorMessage();
		verify(singleFilter2).apply(item2);
		verify(singleFilter2).errorMessage();
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	@Test
	public void firstItemFailSecondItemSuccessShouldGetOneItemInEachList() {
		when(singleFilter1.apply(item1)).thenReturn(true);
		when(singleFilter1.apply(item2)).thenReturn(true);
		when(singleFilter2.apply(item1)).thenReturn(false);
		when(singleFilter2.apply(item2)).thenReturn(true);

		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
				Lists.newArrayList(item1, item2));
		assertThat(response.getInvalidItemsInformations(), contains(matchInvalidInformation(new InvalidItemInformation(item1,
				MESSAGE_FOR_FILTER_2))));
		assertThat(response.getValidItems(), containsInAnyOrder(item2));

		verify(singleFilter1).apply(item1);
		verify(singleFilter1).apply(item2);
		verify(singleFilter2).apply(item1);
		verify(singleFilter2).apply(item2);
		verify(singleFilter2).errorMessage();
		verifyNoMoreInteractions(singleFilter1, singleFilter2);
	}

	private static BaseMatcher<InvalidItemInformation> matchInvalidInformation(InvalidItemInformation expected) {
		return new InvalidItemInformationMatcher(expected);
	}

	private final static class InvalidItemInformationMatcher extends BaseMatcher<InvalidItemInformation> {
		private final InvalidItemInformation expected;

		private InvalidItemInformationMatcher(InvalidItemInformation expected) {
			this.expected = expected;
		}

		public boolean matches(Object itemInformation) {
			InvalidItemInformation actual = (InvalidItemInformation) itemInformation;
			return actual.getName().equals(expected.getName())
					&& actual.getErrorMessage().equals(expected.getErrorMessage());
		}

		// Without a description, a failed match would produce an empty, unhelpful message
		public void describeTo(Description description) {
			description.appendText("invalid item information for ").appendValue(expected.getName())
					.appendText(" with error message ").appendValue(expected.getErrorMessage());
		}
	}
}

Some explanation about the test
You can see that I don’t care about the implementation of Filter. Actually, I don’t even have any implementation of it.
I also don’t have an implementation of Item or Request.
You can also see an example of how to create a BaseMatcher to be used with assertThat(…).

Coding
Try to see whether it is ‘clean’. Can you understand the story of the code? Can you tell what the code does by reading it line by line?

In the following post I will show how I applied the flag mapping to select the correct filters for a request.

You can find all the code in: https://github.com/eyalgo/request-validation

[Edit] Created tag Filtering_an_item before refactoring.

Request Validation and Filtering by Flags – Introduction

General

We are working on a service that should accept some kind of request.

The request has a list of items. In the response we need to tell the client whether the request is valid, and also give some information about each item: whether it is valid or not. If an item is valid, it will be persisted; if not, it should be filtered out. So the response can hold how many items are valid (and sent to be persisted), plus a list of information about the filtered-out items.

The request also carries metadata: a set of flags. The filtering and validation are based on the flags of the request, so one request may be validated and filtered differently from another, depending on its flags.

We might have general validations / filters that need to be applied to any request, whatever flags it has.

Request Validation and Filtering High level design

Design

Flags Mapping

We’ll hold a mapping of flag-to-filters, and flag-to-validation.
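
As a rough sketch (the class FiltersByFlagMapper below is hypothetical; the mapper in the repository may be shaped differently), the flag-to-filters mapping could be a simple Map plus a list of default filters applied to every request:

import java.util.List;
import java.util.Map;
import java.util.Set;

import com.google.common.collect.Lists;

// Sketch of a flag-to-filters mapper, not the exact class from the repository
public class FiltersByFlagMapper {
	private final Map<String, List<Filter>> filtersByFlag;
	private final List<Filter> defaultFilters;

	public FiltersByFlagMapper(Map<String, List<Filter>> filtersByFlag, List<Filter> defaultFilters) {
		this.filtersByFlag = filtersByFlag;
		this.defaultFilters = defaultFilters;
	}

	// Collects the default filters plus the filters mapped to each flag on the request
	public List<Filter> filtersFor(Set<String> flags) {
		List<Filter> filters = Lists.newArrayList(defaultFilters);
		for (String flag : flags) {
			if (filtersByFlag.containsKey(flag)) {
				filters.addAll(filtersByFlag.get(flag));
			}
		}
		return filters;
	}
}

The same idea would apply to the flag-to-validation mapping.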

Request

Has flags and items.

Components

Filter, Filter-Engine, Flags-Mapper

Development Approach

Bottom Up

We have a basic request already, as the service is up and running, but we don’t have any infrastructure for flags, flag-mapping, validation and filtering.

We’ll work bottom up: create the filtering mechanism, enhance the request, and then wire everything up using Spring.

Coding

I’ll try to show the code using tests, and the development using some kind of TDD approach.

I am using Eclipse’s EclEmma for coverage.

General

By looking at the code, you can see usage of JUnit, Mockito, Hamcrest, Google-Guava.

You can also see small classes and an interface-driven development approach.

Source Code

https://github.com/eyalgo/request-validation

Coding Exercise Introduction

As part of my job, I do a lot of architectural design, OOD, clean code, TDD and everything that strives to be craftsmanship.

However, I don’t get to work on many problems such as tree traversal, BFS, DFS, lists, etc.
We can call these kinds of problems CS1 and CS2 course problems.
I also don‘t have the opportunity to learn new languages. We‘re writing in Java and there is no reason at the office to start learning a new language, at least not for business purposes.

But, as a professional developer, I want to constantly exercise, sharpen and improve my skills.
So I took upon myself a small project:

  1. Do some basic coding that I usually don’t do
  2. Learn a new language

As for task #1, I have already written some Java code for problems I thought of, and will try to add more in the coming weeks.

As for task #2, I decided to start learning Ruby. Why Ruby? No particular reason; it’s different from Java and in demand in the market.
Once I get comfortable with Ruby, my plan is to write the problems in Ruby.

[EDIT]
The code is in GitHub
See Why at: Why GitHub
[EDIT]

Some of the code has nicely written tests; with some of it, sad to say, I just played around, and it is not really, automatically tested. This is something that must be fixed as well.

These are the problems I have already written:

  • Factorial
  • Fibonacci
  • Reverse a list
  • Anagram
  • Palindrome
  • BFS tree traversal (a small sketch follows this list)
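
To give a taste of the style, here is a minimal BFS (level-order) traversal sketch; the actual class in brainers-java may look different:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Minimal BFS over a binary tree, returning the values in level order
public class BfsTraverse {

	public static class Node {
		final int value;
		Node left;
		Node right;

		Node(int value) {
			this.value = value;
		}
	}

	public static List<Integer> levelOrder(Node root) {
		List<Integer> visited = new ArrayList<>();
		if (root == null) {
			return visited;
		}
		Queue<Node> queue = new ArrayDeque<>();
		queue.add(root);
		while (!queue.isEmpty()) {
			Node current = queue.poll();
			visited.add(current.value);
			if (current.left != null) {
				queue.add(current.left);
			}
			if (current.right != null) {
				queue.add(current.right);
			}
		}
		return visited;
	}
}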

Code at GitHub: https://github.com/eyalgo/brainers-java

Getting Started with Google Guava – Book Review

I recently got my hands (my kindle) on the book: Getting Started with Google Guava by Bill Bejeck.

I love reading technical books and always hope to learn new stuff. As an extensive user of the Guava library, I was really intrigued to see what I was missing from this library and how I could improve the usage of it.

I will not go over it chapter by chapter with explanations, as anyone can check the TOC and see the details of what this book covers. Instead, I will try to give my own impression.

The book covers all aspects of the Guava library. For each aspect, the author shows the most used implementation and mentions other ones.

In nearly every chapter, I was introduced to some gems that immediately went into our own codebase when I started refactoring. That was FUN. And I saw code improvements instantly.

I really enjoyed reading the code examples with the extensive usage of JUnit as showcases for the behavior of the various classes. It’s a great way of showing what the library does. And as a side effect, it shows developers how a test is used as the specs of the code.

It seems that the author was very meticulous about writing clean and testable code, two areas which I think are the most important for being a professional developer (a craftsman).

I think that this book is great for both newbies and experienced Guava users.
It is also great for developers who want to learn how to write cleaner, better code.