dropwizard-jobs – My First Open Source Contribution

I am very excited today.
Today I made an actual contribution to the open source community:
I helped publish a Java library to Maven Central.

The library we published is a plug-in for Dropwizard that uses Quartz:
https://github.com/spinscale/dropwizard-jobs

You can check it out; the README explains how to use it.
In this post I will not explain the plugin itself, but I will share my experience of contributing to an open source project.

Why Even Contribute

There are so many reasons; Google is full of them.
I did it because:

  • I really wanted to help the community (in this case, the originators of the code).
  • It improves my skill set; I now know more than I knew before.
  • I was exposed to technologies and processes I don't usually use.
  • It's part of my digital signature and branding.

Why This Project

I have known about Dropwizard for more than a year, but I haven't had the chance to use it at work.
I did some experiments with Dropwizard to get a feeling for it.

In one of my POCs, I wanted to create a scheduling mechanism in the micro-service I was building.
By searching Google, I found this project.
First of all, I liked what it does and how.
I also liked the explanation of how to use it; it's clear, and I could work with it immediately.
I think the developers did a good job.

How It (my contribution) All Started

But one thing was missing: it wasn't in any Maven repository (Central or any other public repository).
So I asked whether the developers planned to publish it.

Issue #10 in the repository shows my question (asked on 2015/02/24) and the beginning of the conversation.

Basically, the problem was the time required to comply with the publishing requirements; the code itself was working.

My Contribution

I took it upon myself to publish it to a public Maven repository.
I had never done anything like that, so I wasn't sure what to do.
I thought of using Bintray by JFrog.
Eventually I decided to use Sonatype; it felt more comfortable. So I started reading about OSSRH (Open Source Software Repository Hosting).
There’s an explanation for that below.

I forked the code to my GitHub account and used pull requests to merge the code I pushed.
I mostly modified the pom files to comply with Sonatype's requirements, as explained in the tutorials.

Once we were all set, I did the actual publishing.
And now it’s there. Everyone can use it.

At first I was extra careful with any change. After all, “it's not my code”…
Over time, I felt more comfortable modifying code and opening pull requests.

How To Upload to Sonatype

I used the tutorials, which explain clearly what to do.
http://central.sonatype.org/pages/ossrh-guide.html

  1. Create a user at OSSRH.
  2. Open an issue with a link to the GitHub repository, the group ID and the artifact ID.
  3. Follow the instructions (in our case, I had to modify the Maven group ID).
  4. Add the correct plugins to the pom file:
    maven
    pgp – read it carefully
  5. Deploy.
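
To give an idea of step 4, here is roughly what the plugin declarations in the pom look like, based on the OSSRH guide (the versions here are only an example, not necessarily what the project uses):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-gpg-plugin</artifactId>
  <version>1.5</version>
  <executions>
    <execution>
      <id>sign-artifacts</id>
      <phase>verify</phase>
      <goals>
        <goal>sign</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.sonatype.plugins</groupId>
  <artifactId>nexus-staging-maven-plugin</artifactId>
  <version>1.6.3</version>
  <extensions>true</extensions>
  <configuration>
    <serverId>ossrh</serverId>
    <nexusUrl>https://oss.sonatype.org/</nexusUrl>
    <autoReleaseAfterClose>true</autoReleaseAfterClose>
  </configuration>
</plugin>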


Dropwizard, MongoDB and Gradle Experimenting

Introduction

I created a small project using Dropwizard, MongoDB and Gradle.
It actually started as an experiment with using a Guava cache as a buffer for sending counters to MongoDB (or any other DB).
I wanted to try Gradle with a MongoDB plugin as well.
Next, I wanted to create some kind of interface to check the framework, and I decided to try out Dropwizard.
And this is how this project was created.

This post is not a tutorial of using any of the chosen technologies.
It is a small showcase I did as an experiment.
I guess there are some flaws, and maybe I am not following all the “best practices”.
However, I do believe that the project, with the help of this post, can be a good starting point for the different technologies I used.
I also tried to show some design choices that help achieve SRP, decoupling, cohesion, etc.

I decided to begin the post with the use-case description and how I implemented it.
After that, I will explain what I did with Gradle, MongoDB (including the embedded server) and Dropwizard.

Before I begin, here’s the source code:
https://github.com/eyalgo/CountersBuffering

The Use-Case: Counters With Buffer

We have input requests coming into our servers.
During the processing of a request, we choose to “paint” it with some data (decided by some logic).
Some requests will be painted with Value-1, some with Value-2, etc.; some will not be painted at all.
We want to limit the number of painted requests (per paint value).
In order to enforce the limit, we know the maximum per paint-value, but we also need to count (per paint value) the number of painted requests.
As the system has several servers, the counters should be shared by all of them.

Latency is crucial. Normally request processing takes 4-5 milliseconds (for the whole flow, not just the painting).
So we don't want increasing the counters to add latency.
Instead, we keep a buffer, and the client sends ‘increase’ to the buffer.
The buffer periodically updates the repository with “bulk increments”.

I know it is possible to use Hazelcast, Couchbase or some other fast in-memory DB directly.
But for our use-case, this was the best solution.

The principle is simple:

  • The dependent module will call a service to increase a counter for some key
  • The implementation keeps a buffer of counters per key
  • It is thread safe
  • The writing happens in a separate thread
  • Each write will do a bulk increase
Counters High Level Design

Buffer

For the buffer, I used Google Guava cache.

Buffer Structure

private final LoadingCache<Counterable, BufferValue> cache;
...

this.cache = CacheBuilder.newBuilder()
	.maximumSize(bufferConfiguration.getMaximumSize())
	.expireAfterWrite(bufferConfiguration.getExpireAfterWriteInSec(), TimeUnit.SECONDS)
	.expireAfterAccess(bufferConfiguration.getExpireAfterAccessInSec(), TimeUnit.SECONDS)
	.removalListener((notification) -> increaseCounter(notification))
	.build(new BufferValueCacheLoader());
...

(Counterable is described below)

BufferValueCacheLoader extends CacheLoader.
When we call increase (see below), we first get the value from the cache by key.
If the key does not exist, the loader returns a new BufferValue.

public class BufferValueCacheLoader extends CacheLoader<Counterable, BufferValue> {
	@Override
	public BufferValue load(Counterable key) {
		return new BufferValue();
	}
}

BufferValue wraps an AtomicInteger (I would need to change it to a Long at some point).
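
The real class is in the project's repository; based on how it is used in this post, a minimal sketch of it could look like this:

import java.util.concurrent.atomic.AtomicInteger;

public class BufferValue {
	private final AtomicInteger value = new AtomicInteger(0);

	// increments and returns the new value
	public int increment() {
		return value.incrementAndGet();
	}

	// atomically replaces expected with newValue; true only if no other thread changed it
	public boolean compareAndSet(int expected, int newValue) {
		return value.compareAndSet(expected, newValue);
	}

	public int getValue() {
		return value.get();
	}
}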

Increase the Counter

public void increase(Counterable key) {
	BufferValue meter = cache.getUnchecked(key);
	int currentValue = meter.increment();
	if (currentValue > threshold) {
		if (meter.compareAndSet(currentValue, currentValue - threshold)) {
			increaseCounter(key, threshold);
		}
	}
}

When increasing a counter, we first get the current value from the cache (with the help of the loader, as described above).
compareAndSet atomically checks that the value has not been modified by another thread.
If so, it updates the value and returns true.
On success (returned true), the buffer calls the updater.

View the buffer

After developing the service, I wanted a way to view the buffer.
So I implemented the following method, which is used by the front-end layer (a Dropwizard resource).
It's a small example of Java 8 streams and lambda expressions.

return ImmutableMap.copyOf(cache.asMap())
	.entrySet().stream()
	.collect(
		Collectors.toMap((entry) -> entry.getKey().toString(),
		(entry) -> entry.getValue().getValue()));

MongoDB

I chose MongoDB for two reasons:

  1. We have a similar implementation in our system, for which we decided to use MongoDB as well.
  2. It is easy to use with an embedded server.

I tried to design the system so that it's possible to swap in any other persistence implementation.

I used Morphia as the MongoDB client layer instead of using the Java client directly.
With Morphia you create a DAO, which is the connection to a MongoDB collection.
You also declare a simple Java Bean (POJO) that represents a document in a collection.
Once you have the DAO, you can do operations on the collection the “Java way”, with a fairly easy API.
You can run queries and any other CRUD operations, and more.

I had two operations: increasing a counter and getting all counters.
The service implementations do not extend Morphia's BasicDAO; instead, each holds a class that inherits from it.
I used composition (over inheritance) because I wanted more behavior in both services; see the sketch after the code below.

In order to be consistent with the key representation, and to hide its implementation from the dependent code, I used an interface, Counterable, with a single method: counterKey().

public interface Counterable {
	String counterKey();
}

final class MongoCountersDao extends BasicDAO<Counter, ObjectId> {
	MongoCountersDao(Datastore ds) {
		super(Counter.class, ds);
	}
}
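
To illustrate the composition choice: a service holds the DAO as a field and delegates to it, rather than extending BasicDAO itself. This is only a sketch; the actual service classes in the project may be shaped differently:

public class MongoCountersService {
	// composed, not inherited; the service is free to add its own behavior
	private final MongoCountersDao dao;

	public MongoCountersService(Datastore ds) {
		this.dao = new MongoCountersDao(ds);
	}

	// additional service behavior (buffering, validation, etc.) goes here,
	// delegating collection operations to the DAO
}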

Increasing the Counter

@Override
protected void increaseCounter(String key, int value) {
	Query<Counter> query = dao.createQuery();
	query.criteria("id").equal(key);
	UpdateOperations<Counter> ops = dao.getDs().createUpdateOperations(Counter.class).inc("count", value);
	dao.getDs().update(query, ops, true);
}

Embedded MongoDB

In order to run tests on the persistence layer, I wanted to use an in-memory database.
There’s a MongoDB plugin for that.
With this plugin you can run a server by just creating it on runtime, or run as goal in maven / task in Gradle.
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo
https://github.com/sourcemuse/GradleMongoPlugin
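
Running it at runtime looks roughly like this. This is a sketch assuming flapdoodle's classic 1.x API (class names may differ between versions):

import de.flapdoodle.embed.mongo.MongodExecutable;
import de.flapdoodle.embed.mongo.MongodProcess;
import de.flapdoodle.embed.mongo.MongodStarter;
import de.flapdoodle.embed.mongo.config.IMongodConfig;
import de.flapdoodle.embed.mongo.config.MongodConfigBuilder;
import de.flapdoodle.embed.mongo.config.Net;
import de.flapdoodle.embed.mongo.distribution.Version;
import de.flapdoodle.embed.process.runtime.Network;

public class EmbeddedMongoExample {
	public static void main(String[] args) throws Exception {
		MongodStarter starter = MongodStarter.getDefaultInstance();
		IMongodConfig config = new MongodConfigBuilder()
				.version(Version.Main.PRODUCTION)
				.net(new Net(12345, Network.localhostIsIPv4()))
				.build();
		MongodExecutable executable = starter.prepare(config);
		MongodProcess process = executable.start();
		// ... run code / tests against localhost:12345 ...
		process.stop();
		executable.stop();
	}
}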

Embedded MongoDB on Gradle

I will elaborate more on Gradle later, but here's what I needed to do in order to set up the embedded Mongo.

dependencies {
	// More dependencies here
	testCompile 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
}

Setup Properties

mongo {
	//	logFilePath: The desired log file path (defaults to 'embedded-mongo.log')
	logging 'console'
	mongoVersion 'PRODUCTION'
	port 12345
	//	storageLocation: The directory location from where embedded Mongo will run, such as /tmp/storage (defaults to a java temp directory)
}

Embedded MongoDB Gradle Tasks

startMongoDb just starts the server; it runs until you stop it.
stopMongoDb stops it.
startManagedMongoDb test: two tasks that start the embedded server before the tests run. The server shuts down when the JVM finishes (when the tests finish).

Gradle

https://gradle.org/
Although I have only touched the tip of the iceberg, I have started to see the strength of Gradle.
It wasn't even that hard to set up the project.

Gradle Setup

First, I created a Gradle project in Eclipse (after installing the plugin).
Then I needed to set up the dependencies. Very simple; just like Maven.

One Big JAR Output

When I want to create one big jar from all the libraries in Maven, I use the Shade plugin.
I was looking for something similar and found the gradle-one-jar plugin:
https://github.com/rholder/gradle-one-jar
I added the plugin:
apply plugin: 'gradle-one-jar'
And added one-jar to the classpath:

buildscript {
	repositories { mavenCentral() }
	dependencies {
		classpath 'com.sourcemuse.gradle.plugin:gradle-mongo-plugin:0.4.0'
		classpath 'com.github.rholder:gradle-one-jar:1.0.4'
	}
}

And added a task:

mainClassName = 'org.eyalgo.server.dropwizard.CountersBufferApplication'
task oneJar(type: OneJar) {
	mainClass = mainClassName
	archiveName = 'counters.jar'
	mergeManifestFromJar = true
}

Those were the actions needed to make the application run.

Dropwizard

Dropwizard is a stack of libraries that makes it easy to create web servers quickly.
It uses Jetty for HTTP and Jersey for REST, and it has other mature libraries for creating complicated services.
It makes it easy to develop a microservice.

As I explained in the introduction, I will not cover all of Dropwizard's features and setup; there are plenty of sites for that.
I will briefly cover the actions I took to make the application run.

Gradle Run Task

run { args 'server', './src/main/resources/config/counters.yml' }
The first argument is server; the second is the location of the configuration file.
If you don't give Dropwizard the first argument, you get a nice error message listing the possible options.

positional arguments:
  {server,check}         available commands

I already showed how to create one jar in the Gradle section.

Configuration

In Dropwizard, you set up the application using a class that extends Configuration.
The fields in the class should align with the properties in the yml configuration file.

It is good practice to put the properties in groups, based on their usage/responsibility.
For example, I created a group for the Mongo parameters.

In order for the configuration class to read the sub-groups correctly, you need to create a class that aligns with the properties in the group.
Then, in the main configuration, add this class as a member and mark it with the @JsonProperty annotation.
Example:

@JsonProperty("mongo")
private MongoServicesFactory servicesFactory = new MongoServicesFactory();
@JsonProperty("buffer")
private BufferConfiguration bufferConfiguration = new BufferConfiguration();
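
For completeness, here is a sketch of what the class behind the “buffer” group might look like. The field names follow the buffer setup shown earlier; the default values are made up for the example:

public class BufferConfiguration {
	@JsonProperty
	private long maximumSize = 1000; // default is an assumption
	@JsonProperty
	private int expireAfterWriteInSec = 60;
	@JsonProperty
	private int expireAfterAccessInSec = 60;

	public long getMaximumSize() {
		return maximumSize;
	}

	public int getExpireAfterWriteInSec() {
		return expireAfterWriteInSec;
	}

	public int getExpireAfterAccessInSec() {
		return expireAfterAccessInSec;
	}
}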

Example: Changing the Ports

Here’s part of the configuration file that sets the ports for the application.

server:
  adminMinThreads: 1
  adminMaxThreads: 64
  applicationConnectors:
    - type: http
      port: 9090
  adminConnectors:
    - type: http
      port: 9091

Health Check

Dropwizard gives a basic admin API out of the box. I changed its port to 9091.
I created a health check for the MongoDB connection.
You need to extend HealthCheck and implement its check method.

private final MongoClient mongo;
...
protected Result check() throws Exception {
	try {
		mongo.getDatabaseNames();
		return Result.healthy();
	} catch (Exception e) {
		return Result.unhealthy("Cannot connect to " + mongo.getAllAddress());
	}
}
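
For the health check to appear in the admin API, it has to be registered in the application's run method. Here is a sketch of the wiring, assuming the check above is wrapped in a class named MongoConnectionHealthCheck (all names in this snippet are assumptions, not necessarily the project's actual classes):

// in the Application subclass
@Override
public void run(CountersBufferConfiguration configuration, Environment environment) throws Exception {
	MongoClient mongoClient = new MongoClient("localhost", 27017);
	// "mongo" is the name the admin page will show for this check
	environment.healthChecks().register("mongo", new MongoConnectionHealthCheck(mongoClient));
}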

Other features are pretty much self-explanatory, or as simple as in any getting-started tutorial.

Ideas for Enhancement

There are some things I may try to add.

  • Add tests to the Dropwizard section.
    This project started as a PoC, so, unlike my usual practice, I skipped the tests in the server part.
    Dropwizard has Testing Dropwizard, which I want to try.
  • A different persistence implementation (Couchbase? Hazelcast?).
  • Injection using Google Guice, and with its help, injecting different persistence implementations.

That’s all.
Hope that helps.

Source code: https://github.com/eyalgo/CountersBuffering


Java 8 Stream and Lambda Expressions – Parsing File Example

Recently I wanted to extract certain data from an output log.
Here’s part of the log file:

2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: eVentToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557]
2015-01-06 11:33:03 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar
2015-01-06 11:33:04 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557]
2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo
2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz

I decided to do it using the Java 8 Stream and lambda expression features.

Read the file
First, I needed to read the log file and put the lines in a Stream:

Stream<String> lines = Files.lines(Paths.get(args[1]));

Filter relevant lines
I needed to get the package names and write them into another file.
Not all lines contain the data I need, hence I filter only the relevant ones.

lines.filter(line -> line.contains("===---> Loaded package"))

Parsing the relevant lines
Then I needed to parse the relevant lines.
I did that by first splitting each line into an array of Strings and then taking the last element of that array.
In other words, I did a double mapping: first a line to an array, then the array to a String.

.map(line -> line.split(" "))
.map(arr -> arr[arr.length - 1])

Writing to output file
The last part was taking each String and writing it to a file. That was the terminal operation.

.forEach(packageName -> writeToFile(fw, packageName));

writeToFile is a method I created.
The reason is that the Java file-system API throws IOException, and you can't use checked exceptions in lambda expressions.

Here’s a full example (note, I don’t check input)

import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class App {
	public static void main(String[] args) throws IOException {
		Stream<String> lines = null;
		if (args.length == 2) {
			lines = Files.lines(Paths.get(args[1]));
		} else {
			String s1 = "2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: adEventToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557]";
			String s2 = "2015-01-06 11:33:03 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557]";
			String s3 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar";
			String s4 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo";
			String s5 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz";
			List<String> rows = Arrays.asList(s1, s2, s3, s4, s5);
			lines = rows.stream();
		}
		
		new App().parse(lines, args[0]);

	}
	
	private void parse(Stream<String> lines, String output) throws IOException {
		final FileWriter fw = new FileWriter(output);
		
		//@formatter:off
		lines.filter(line -> line.contains("===---> Loaded package"))
		.map(line -> line.split(" "))
		.map(arr -> arr[arr.length - 1])
		.forEach(packageName-> writeToFile(fw, packageName));
		//@formatter:on
		fw.close();
		lines.close();
	}

	private void writeToFile(FileWriter fw, String packageName) {
		try {
			fw.write(String.format("%s%n", packageName));
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
	}

}


Playing With Java Concurrency

Recently I needed to transform some files, each containing a list (array) of objects in JSON format, into files with the same data (objects) as separate lines.

It was a one-time task and a simple one.
I did the reading and writing using some features of Java NIO.
I used GSON in the simplest way.
One thread runs over the files, converts and writes.

The whole operation finished in a few seconds.

However, I wanted to play a little bit with concurrency.
So I enhanced the tool to work concurrently:

Threads
A Runnable for reading a file.
The reader runnables are submitted to an ExecutorService.
The output, which is a list of objects (User in the example), is put in a BlockingQueue.

A Runnable for writing a file.
Each writer runnable polls from the blocking queue and writes lines of data to a file.
I don't add the writer Runnable to the ExecutorService; instead I just start a thread with it.
The runnable has a while(someBoolean is true) {...} pattern.
More about that below…

Synchronizing Everything
The BlockingQueue is the interface between the two types of threads.

As the writer runnable runs in a while loop (a consumer), I wanted to be able to make it stop so the tool terminates.
So I used two objects for that:

Semaphore
The loop that traverses the input files increments a counter (the number of files).
Once I finished traversing the input files and submitting the readers, I acquired the semaphore (initialized with 0 permits) in the main thread:
semaphore.acquire(numberOfFiles);

Each time a writer finishes writing a file, it releases the semaphore:
semaphore.release();

AtomicBoolean
The writers' while loop checks an AtomicBoolean.
As long as the AtomicBoolean is true, the writer continues.

In the main thread, just after acquiring the semaphore, I set the AtomicBoolean to false.
This lets the writer threads terminate.

Using Java NIO
In order to scan, read and write the file system, I used some features of Java NIO.

Scanning: Files.newDirectoryStream(inputFilesDirectory, "*.json");
Deleting output directory before starting: Files.walkFileTree...
BufferedReader and BufferedWriter: Files.newBufferedReader(filePath); Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());

One note: in order to generate random files for this example, I used Apache commons-lang: RandomStringUtils.randomAlphabetic.
All the code is in GitHub.

public class JsonArrayToJsonLines {
	private final static Path inputFilesDirectory = Paths.get("src\\main\\resources\\files");
	private final static Path outputDirectory = Paths
			.get("src\\main\\resources\\files\\output");
	private final static Gson gson = new Gson();
	
	private final BlockingQueue<EntitiesData> entitiesQueue = new LinkedBlockingQueue<>();
	
	private AtomicBoolean stillWorking = new AtomicBoolean(true);
	private Semaphore semaphore = new Semaphore(0);
	int numberOfFiles = 0;

	private JsonArrayToJsonLines() {
	}

	public static void main(String[] args) throws IOException, InterruptedException {
		new JsonArrayToJsonLines().process();
	}

	private void process() throws IOException, InterruptedException {
		deleteFilesInOutputDir();
		final ExecutorService executorService = createExecutorService();
		DirectoryStream<Path> directoryStream = Files.newDirectoryStream(inputFilesDirectory, "*.json");
		
		for (int i = 0; i < 2; i++) {
			new Thread(new JsonElementsFileWriter(stillWorking, semaphore, entitiesQueue)).start();
		}

		directoryStream.forEach(new Consumer<Path>() {
			@Override
			public void accept(Path filePath) {
				numberOfFiles++;
				executorService.submit(new OriginalFileReader(filePath, entitiesQueue));
			}
		});
		
		semaphore.acquire(numberOfFiles);
		stillWorking.set(false);
		shutDownExecutor(executorService);
	}

	private void deleteFilesInOutputDir() throws IOException {
		Files.walkFileTree(outputDirectory, new SimpleFileVisitor<Path>() {
			@Override
			public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
				Files.delete(file);
				return FileVisitResult.CONTINUE;
			}
		});
	}

	private ExecutorService createExecutorService() {
		int numberOfCpus = Runtime.getRuntime().availableProcessors();
		return Executors.newFixedThreadPool(numberOfCpus);
	}

	private void shutDownExecutor(final ExecutorService executorService) {
		executorService.shutdown();
		try {
			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
				executorService.shutdownNow();
			}

			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
				// the executor still did not terminate; nothing more we can do here
			}
		} catch (InterruptedException ex) {
			executorService.shutdownNow();
			Thread.currentThread().interrupt();
		}
	}


	private static final class OriginalFileReader implements Runnable {
		private final Path filePath;
		private final BlockingQueue<EntitiesData> entitiesQueue;

		private OriginalFileReader(Path filePath, BlockingQueue<EntitiesData> entitiesQueue) {
			this.filePath = filePath;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			Path fileName = filePath.getFileName();
			try {
				BufferedReader br = Files.newBufferedReader(filePath);
				User[] entities = gson.fromJson(br, User[].class);
				System.out.println("---> " + fileName);
				entitiesQueue.put(new EntitiesData(fileName.toString(), entities));
			} catch (IOException | InterruptedException e) {
				throw new RuntimeException(filePath.toString(), e);
			}
		}
	}

	private static final class JsonElementsFileWriter implements Runnable {
		private final BlockingQueue<EntitiesData> entitiesQueue;
		private final AtomicBoolean stillWorking;
		private final Semaphore semaphore;

		private JsonElementsFileWriter(AtomicBoolean stillWorking, Semaphore semaphore,
				BlockingQueue<EntitiesData> entitiesQueue) {
			this.stillWorking = stillWorking;
			this.semaphore = semaphore;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			while (stillWorking.get()) {
				try {
					EntitiesData data = entitiesQueue.poll(100, TimeUnit.MILLISECONDS);
					if (data != null) {
						try {
							String fileOutput = outputDirectory.toString() + File.separator + data.fileName;
							Path fileOutputPath = Paths.get(fileOutput);
							BufferedWriter writer = Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());
							for (User user : data.entities) {
								writer.append(gson.toJson(user));
								writer.newLine();
							}
							writer.flush();
							System.out.println("=======================================>>>>> " + data.fileName);
						} catch (IOException e) {
							throw new RuntimeException(data.fileName, e);
						} finally {
							semaphore.release();
						}
					}
				} catch (InterruptedException e1) {
					// poll timed out or the thread was interrupted; loop back and re-check the flag
				}
			}
		}
	}

	private static final class EntitiesData {
		private final String fileName;
		private final User[] entities;

		private EntitiesData(String fileName, User[] entities) {
			this.fileName = fileName;
			this.entities = entities;
		}
	}
}


JUnit Rules

Introduction
In this post I would like to show an example of how to use a JUnit Rule to make testing easier.

Recently I inherited a rather complex system, not all of which is tested, and even the tested code is complex.
Mostly I see a lack of test isolation.
(I will write a different blog post about working with legacy code.)

One of the tests (and the code) I am fixing actually tests several components together.
It also connects to the DB. It tests some logic and the intersection between components.
When the code did not compile in a totally different location, the test could not run, because it loaded the whole Spring context.
The structure was such that before testing (any class), the whole Spring context was initiated.
The tests extend BaseTest, which loads the whole Spring context.

BaseTest also cleans the DB in its @After method.

Important note: this article is about changing tests that are not structured entirely correctly.
When creating new code and tests, they should be isolated, test one thing, etc.
Better tests should use a mock DB / mock dependencies, etc.
After I fix the tests and refactor, I'll have the confidence to make more changes.

Back to our topic…
So what I had was a slow test suite, no isolation, and even problems running tests due to unrelated issues.

So I decided to separate the context loading and the DB connection, and to separate both of them from the cleanup of the database.

Approach
In order to achieve that, I did three things:
The first was to change the inheritance of the test class.
It stopped inheriting BaseTest.
Instead, it inherits AbstractJUnit4SpringContextTests.
Now I can create my own context per test and not load everything.

Next, I needed two rules: a @ClassRule and a @Rule.
The @ClassRule is responsible for the DB connection.
The @Rule cleans up the DB before / after each test.

But first, what are JUnit Rules?
A short explanation would be that they provide the possibility to intercept test methods, similar to the AOP concept.
@Rule allows us to intercept a method before and after its actual run.
@ClassRule intercepts the test class run.
A well-known @Rule is JUnit's TemporaryFolder.

(They are similar to @Before, @After and @BeforeClass.)

Creating @Rule
The easy part was to create a rule that cleans up the DB before and after a test method.
You need to implement TestRule, which has one method: Statement apply(Statement base, Description description);
You can do a lot with it.
I found that I will usually have an inner class that extends Statement.
The rule I created did not create the DB connection; it got it in the constructor.

Here’s the full code:

Creating @ClassRule
A ClassRule is actually also a TestRule.
The only difference is how we use it in our test code.
I'll show that below.

The challenge in creating this rule was that I wanted to use the Spring context to get the correct connection.
Here's a sketch of the idea (ExternalResource is itself a TestRule); the context file name is illustrative:
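
import javax.sql.DataSource;

import org.junit.rules.ExternalResource;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class DbConnectionRule extends ExternalResource {
	private ClassPathXmlApplicationContext context;
	private DataSource dataSource;

	@Override
	protected void before() {
		// load only the beans needed for the DB connection (hypothetical context file)
		context = new ClassPathXmlApplicationContext("db-context.xml");
		dataSource = context.getBean(DataSource.class);
	}

	@Override
	protected void after() {
		context.close();
	}

	public DataSource dataSource() {
		return dataSource;
	}
}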

(Did you notice that I could have made DbCleanupRule inherit from ExternalResource?)

Using it
The last part is how we use the rules.
A @Rule must be a public field.
A @ClassRule must be a public static field.

And here is how a test class might use them (a sketch based on the rules above):
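
public class SomeComponentDbTest extends AbstractJUnit4SpringContextTests {
	@ClassRule
	public static DbConnectionRule connectionRule = new DbConnectionRule();

	@Rule
	public DbCleanupRule cleanupRule = new DbCleanupRule(connectionRule.dataSource());

	@Test
	public void someTestThatTouchesTheDb() {
		// test body
	}
}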

That’s all.
Hope it helps.

Eyal

[Edit]
I got some good remarks from Logan Mzz at DZone: http://java.dzone.com/articles/junit-rules#comment-125673

  1. Link to JUnit Rules: https://github.com/junit-team/junit/wiki/Rules
  2. There's the ErrorCollector rule, which avoids annoying test-fail-fix cycles for a single test.
  3. And RuleChain, which is described in the comment.


RSS Reader Using: ROME, Spring MVC, Embedded Jetty

In this post I will show some guidelines for creating a Spring web application, running it using Jetty, and using an external library called ROME for RSS reading.

General

I recently created a sample web application that acts as an RSS reader.
I wanted to examine ROME for RSS reading.
I also wanted to create the application using the Spring container, and MVC for the simplest view.
For rapid development, I used Jetty as the server, started from a simple Java class.
All the code can be found on GitHub: eyalgo/rss-reader.

Content

  1. Maven Dependencies
  2. Jetty Server
  3. Spring Dependency
  4. Spring MVC
  5. ROME

Maven Dependencies

At first, I could not figure out the correct Jetty version to use.
There is one with group-id mortbay, and another by Eclipse.
After some careful examination and trial and error, I took Eclipse's library.
Spring is just standard.
I found ROME's newest version on GitHub. It's still a SNAPSHOT.

Here’s the list of the dependencies:

  • Spring
  • Jetty
  • ROME and rome-fetcher
  • Logback and SLF4J
  • For testing:
    • JUnit
    • Mockito
    • Hamcrest
    • spring-test

The project’s pom file can be found at: https://github.com/eyalgo/rss-reader/blob/master/pom.xml

Jetty Server

A few years ago I worked with the Wicket framework and got to know Jetty and its easy usage for creating a server.
I decided to go in that direction and skip the standard web server with WAR deployment.

There are several ways to create the Jetty server.
I decided to create the server using a web application context.

First, create the context:

private WebAppContext createContext() {
  WebAppContext webAppContext = new WebAppContext();
  webAppContext.setContextPath("/");
  webAppContext.setWar(WEB_APP_ROOT);
  return webAppContext;
}

Then, create the server and add the context as handler:

  Server server = new Server(port);
  server.setHandler(webAppContext);

Finally, start the server:

  try {
    server.start();
  } catch (Exception e) {
    LOGGER.error("Failed to start server", e);
    throw new RuntimeException(e);
  }

Everything is under https://github.com/eyalgo/rss-reader/tree/master/src/test/java/com/eyalgo/rssreader/server

Spring Project Structure

RSS Reader Project Structure

Spring Dependency

In web.xml I declare application-context.xml and web-context.xml.
In web-context.xml, I tell Spring where to scan for components:
<context:component-scan base-package="com.eyalgo.rssreader"/>
In application-context.xml I add a bean, which is an external class; I therefore can't scan it (can't use annotations):
<bean id="fetcher" class="org.rometools.fetcher.impl.HttpURLFeedFetcher"/>

Besides scanning, I add the correct annotations to the correct classes:
@Repository
@Service
@Controller

@Autowired

Spring MVC

In order to have a basic view of the RSS feeds (and Atom feeds), I used simple MVC and JSP pages.
To create a controller, I needed to add @Controller to the class.
I added @RequestMapping("/rss") so all requests are prefixed with rss.

Each method has a @RequestMapping declaration. I decided that everything is GET.

Adding a Parameter to the Request

Just add @RequestParam("feedUrl") before the method parameter.

Redirecting a Request

After adding an RSS location, I wanted to redirect the response to show all current RSS items.
So the method for adding an RSS feed needed to return a String.
The returned value is “redirect:all”.

  @RequestMapping(value = "feed", method = RequestMethod.GET)
  public String addFeed(@RequestParam("feedUrl") String feedUrl) {
    feedReciever.addFeed(feedUrl);
    return "redirect:all";
  }

Return a ModelAndView Class

In Spring MVC, when a method returns a String, the framework looks for a JSP page with that name.
If there is none, we get an error.
(If you want to return just the String as the response body, add @ResponseBody to the method.)

In order to use ModelAndView, you need to create one with a name:
ModelAndView modelAndView = new ModelAndView("rssItems");
The name will tell Spring MVC which JSP to refer to.
In this example, it will look for rssItems.jsp.

Then you can add to the ModelAndView “objects”:

  List<FeedItem> items = itemsRetriever.get();
  ModelAndView modelAndView = new ModelAndView("rssItems");
  modelAndView.addObject("items", items);

In the JSP page, you refer to the names of the objects you added.
Then you can access their properties.
So in this example, we have the following in rssItems.jsp:

  <c:forEach items="${items}" var="item">
    <div>
      <a href="${item.link}" target="_blank">${item.title}</a><br>
        ${item.publishedDate}
    </div>
  </c:forEach>

Note
Spring “knows” to add jsp as a suffix to the ModelAndView name because I declared it in web-context.xml,
in the bean of class org.springframework.web.servlet.view.InternalResourceViewResolver.
By setting the prefix, this bean also tells Spring where to look for the JSP pages.
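
The declaration looks roughly like this (the prefix value is an assumption based on the project's layout):

<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
  <property name="prefix" value="/WEB-INF/views/" />
  <property name="suffix" value=".jsp" />
</bean>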
Please look:
https://github.com/eyalgo/rss-reader/blob/master/src/main/java/com/eyalgo/rssreader/web/RssController.java
https://github.com/eyalgo/rss-reader/blob/master/src/main/webapp/WEB-INF/views/rssItems.jsp

Error Handling

There are several ways to handle errors in Spring MVC.
I chose a generic way, in which for any error a general error page is shown.

First, add @ControllerAdvice to the class that will handle the errors.

Second, create a method per type of exception you want to catch.
Annotate the method with @ExceptionHandler; its parameter tells which exception the method handles.

You can have a method for IllegalArgumentException, another for a different exception, and so on.

The return value can be anything, and it acts as a normal controller. That means having a JSP (for example) with the name of the object the method returns.

In this example, the method catches all exceptions and activates error.jsp, adding the message to the page.

  @ExceptionHandler(Exception.class)
  public ModelAndView handleAllException(Exception e) {
    ModelAndView model = new ModelAndView("error");
    model.addObject("message", e.getMessage());
    return model;
  }

ROME

ROME is an easy-to-use library for handling RSS feeds.
https://github.com/rometools/rome
rome-fetcher is an additional library that helps to get (fetch) RSS feeds from external sources, such as HTTP or a URL.
https://github.com/rometools/rome-fetcher

As of now, the latest build is 2.0.0-SNAPSHOT.

An example on how to read an input RSS XML file can be found at:
https://github.com/eyalgo/rss-reader/blob/master/src/test/java/com/eyalgo/rssreader/runners/MetadataFeedRunner.java

To make life easier, I used rome-fetcher.
It gives you the ability to pass a URL (an RSS feed) and get a SyndFeed out of it.

If you want, you can add caching, so it won't re-download cached items (items that were already downloaded).
All you need is to create the fetcher with a FeedFetcherCache parameter in the constructor.
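
A sketch of creating such a cached fetcher; HashMapFeedInfoCache is rome-fetcher's in-memory cache implementation:

// classes from org.rometools.fetcher and org.rometools.fetcher.impl
FeedFetcherCache cache = HashMapFeedInfoCache.getInstance();
FeedFetcher fetcher = new HttpURLFeedFetcher(cache);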

Usage:

  @Override
  public List<FeedItem> extractItems(String feedUrl) {
    try {
      List<FeedItem> result = Lists.newLinkedList();
      URL url = new URL(feedUrl);
      SyndFeed feed = fetcher.retrieveFeed(url);
      List<SyndEntry> entries = feed.getEntries();
      for (SyndEntry entry : entries) {
        result.add(new FeedItem(entry.getTitle(), entry.getLink(), entry.getPublishedDate()));
      }
      return result;
    } catch (IllegalArgumentException | IOException | FeedException | FetcherException e) {
      throw new RuntimeException("Error getting feed from " + feedUrl, e);
    }
}

https://github.com/eyalgo/rss-reader/blob/master/src/main/java/com/eyalgo/rssreader/service/rome/RomeItemsExtractor.java

Note
If you get a warning message (printed to System.out) saying that fetcher.properties is missing, just add an empty file under resources (or in the root of the classpath).

Summary

This post covered several topics.
You can also have a look at the way much of the code is tested.
Check out the matchers and mocks.

If you have any remarks, please drop a note.

Eyal
