dropwizard-jobs – My First Open Source Contribution

I am very excited today.
Today I made an actual contribution to the open source community.
I helped publish a Java library to Maven Central.

The library we published is a plugin for Dropwizard that uses Quartz:
https://github.com/spinscale/dropwizard-jobs

You can check it out; the README explains how to use it.
In this post I will not explain the plugin itself, but rather share my experience of contributing to an open source project.

Why Even Contribute

There are many reasons; Google is full of them. Here are mine:

  • I really wanted to help the community (in this case, the originators of the code).
  • It improves my skill set. I know more now than I did before.
  • I was exposed to technologies and processes that I don’t usually use.
  • It’s part of my digital signature and branding.

Why This Project

I have known about Dropwizard for more than a year.
I haven’t had the chance to use it at work.
I did some experiments with Dropwizard to get a feel for it.

In one of my POCs, I wanted to create a scheduling mechanism in the micro-service I created.
By searching Google, I found this project.
First of all, I liked what it does and how it does it.
I also liked the usage explanation; it’s clear, and I could work with the library immediately.
I think the developers did a good job.

How It (my contribution) All Started

But one thing was missing: it wasn’t in any Maven repository (Central or any other public one).
So I asked whether the developers plan to publish it.

Issue #10 in the repository shows my question and the beginning of the conversation.
(Issue #10, question from 2015-02-24)

Basically, the problem was the time needed to comply with the publishing requirements. The code itself was working.

My Contribution

I took it upon myself to publish it to a public Maven repository.
I had never done anything like that before, so I wasn’t sure what to do.
I thought of using Bintray by JFrog.
Eventually I decided to use Sonatype; it felt more comfortable. So I started reading about OSSRH (Open Source Software Repository Hosting).
There’s an explanation of the process below.

I forked the code to my GitHub account and used pull requests to merge back the code I pushed.
I mostly modified the pom files to comply with the Sonatype requirements, as explained in the tutorials.

Once we were all set, I did the actual publishing.
And now it’s there. Everyone can use it.

At first I was extra careful with any change. After all, “it’s not my code”…
Over time, I felt more comfortable modifying the code and opening pull requests.

How To Upload to Sonatype

I used the tutorials, which explain clearly what to do.
http://central.sonatype.org/pages/ossrh-guide.html

  1. Create a user at OSSRH.
  2. Open an issue with links to GitHub, the group ID and the artifact ID.
  3. Follow the instructions (in our case, I had to modify Maven’s group ID).
  4. Add the correct plugins to the pom file:
    maven
    pgp – read it carefully
  5. Deploy (e.g. mvn clean deploy).


Fedora Installation – Aggregate Installation Tips

One of the reasons I am writing this blog is to keep a “log” for myself of how I resolved issues.

In this post I will describe how I installed several basic development tools on a Fedora OS.
I want this laptop to be my workstation for out-of-work projects.

Almost everything in this post can be found elsewhere on the web.
Actually, most of what I am writing here comes from other links.

However, this post is intended to aggregate several installations together.

If you’re new to Linux (or not an expert, as I am not), you can learn some basic stuff here:
how to install with yum, how to build from source code, how to set up environment variables, and maybe more.

First, we’ll start with how I installed Fedora.

Installing Fedora

I downloaded Fedora ISO from https://getfedora.org/en/workstation/.
It is the GNOME distribution.
I then used http://www.linuxliveusb.com/ to create a bootable USB. It’s very easy to use.
I switched to KDE by running: sudo yum install @kde-desktop

Installing Java

Download the RPM package from the Oracle site.

# root
su -
# Install JDK in system
rpm -Uvh /path/.../jdk-8u40-linux-i586.rpm
# Use correct Java
alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 2000000
alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 2000000
alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 2000000
alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 2000000
# Example how to swap javac
# alternatives --config javac

Under /etc/profile.d/ , create a file (jdk_home.sh) with the following content:

# Put this file under /etc/profile.d
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$JAVA_HOME/bin

I used the following link, which explains how to install the JDK:
http://www.if-not-true-then-false.com/2014/install-oracle-java-8-on-fedora-centos-rhel/

Installing IntelliJ

Location: https://www.jetbrains.com/idea/download/

# root
su -
# Create IntelliJ location
mkdir -p /opt/idea
# Untar installation
tar -xvzf /path/.../ideaIC-14.1.tar.gz -C /opt/idea
# Create link for latest IntelliJ
ln -s /opt/idea/idea-IC-141.177.4/ /opt/idea/latest
chmod -R +r /opt/idea

Check https://www.jetbrains.com/idea/help/basics-and-installation.html

After installation, you can go to /opt/idea/latest/bin and run idea.sh
Once you run it, you will be prompted to create a desktop entry.
You can also create a command line launcher later on.

Installing Eclipse

Location: http://www.eclipse.org/downloads/

su -
# create eclipse location
mkdir /opt/eclipse
# Untar it
tar -xvzf /path/.../eclipse-java-luna-SR2-linux-gtk.tar.gz -C /opt/eclipse
# create link
ln -s /opt/eclipse/eclipse/ /opt/eclipse/latest
# Permissions
chmod -R +r /opt/eclipse/

Create executable /usr/bin/eclipse
#!/bin/sh
# name it eclipse
# put it in /usr/bin
# chmod 755 /usr/bin/eclipse
export ECLIPSE_HOME="/opt/eclipse/latest"
$ECLIPSE_HOME/eclipse "$@"

Create Desktop Launcher
# create /usr/local/share/applications/eclipse.desktop
# Paste the following
[Desktop Entry]
Encoding=UTF-8
Name=Eclipse
Comment=Eclipse Luna 4.4.2
Exec=eclipse
Icon=/opt/eclipse/latest/icon.xpm
Terminal=false
Type=Application
Categories=Development;IDE;
StartupNotify=true

See also http://www.if-not-true-then-false.com/2010/linux-install-eclipse-on-fedora-centos-red-hat-rhel/

Installing Maven

Download https://maven.apache.org/download.cgi

# root
su -
# installation location
mkdir /opt/maven
# untar
tar -zxvf /path/.../apache-maven-3.3.1-bin.tar.gz -C /opt/maven
# link
ln -s /opt/maven/apache-maven-3.3.1/ /opt/maven/latest

Setting maven environment
# put it in /etc/profile.d
export M2_HOME=/opt/maven/latest
export M2=$M2_HOME/bin
export PATH=$M2:$PATH

Installing git

I wanted to have the latest git client.
Using yum install did not get me the latest version, so I decided to install from source code.
I found a great blog explaining how to do it.
http://tecadmin.net/install-git-2-0-on-centos-rhel-fedora/
Note: in the compile part, he exports variables into /etc/bashrc .
Don’t do that. Instead, create a file under /etc/profile.d
Installation commands

su -
yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel
yum install gcc perl-ExtUtils-MakeMaker
yum remove git
# Download source
# check latest version in http://git-scm.com/downloads
cd /usr/src
wget https://www.kernel.org/pub/software/scm/git/git-<latest-version>.tar.gz
tar xzf git-<latest-version>.tar.gz
# create git from source code
cd git-<latest-version>
make prefix=/opt/git all
make prefix=/opt/git install

git Environment
Create an ‘sh’ file under /etc/profile.d
# save under /etc/profile.d/git-env.sh
export PATH=$PATH:/opt/git/bin


Java 8 Stream and Lambda Expressions – Parsing File Example

Recently I wanted to extract certain data from an output log.
Here’s part of the log file:

2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: eVentToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557]
2015-01-06 11:33:03 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar
2015-01-06 11:33:04 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557]
2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo
2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz

I decided to do it using the Java 8 Stream and Lambda Expression features.

Read the file
First, I needed to read the log file and put the lines in a Stream:

Stream<String> lines = Files.lines(Paths.get(args[1]));

Filter relevant lines
I needed to get the package names and write them into another file.
Not all lines contained the data I needed, hence I filtered only the relevant ones:

lines.filter(line -> line.contains("===---> Loaded package"))

Parsing the relevant lines
Then, I needed to parse the relevant lines.
I did it by first splitting each line into an array of Strings and then taking the last element of that array.
In other words, I did a double mapping: first a line to an array, and then an array to a String.

.map(line -> line.split(" "))
.map(arr -> arr[arr.length - 1])

Writing to output file
The last part was taking each string and writing it to a file. That was the terminal operation.

.forEach(packageName -> writeToFile(fw, packageName));

writeToFile is a method I created.
The reason is that the Java file-system API throws IOException, and you can’t use checked exceptions in lambda expressions.

Here’s a full example (note that I don’t check the input):

import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class App {
	public static void main(String[] args) throws IOException {
		Stream<String> lines = null;
		if (args.length == 2) {
			lines = Files.lines(Paths.get(args[1]));
		} else {
			String s1 = "2015-01-06 11:33:03 b.s.d.task [INFO] Emitting: adEventToRequestsBolt __ack_ack [-6722594615019711369 -1335723027906100557]";
			String s2 = "2015-01-06 11:33:03 b.s.d.executor [INFO] Processing received message source: eventToManageBolt:2, stream: __ack_ack, id: {}, [-6722594615019711369 -1335723027906100557]";
			String s3 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package com.foo.bar";
			String s4 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package co.il.boo";
			String s5 = "2015-01-06 11:33:04 c.s.p.d.PackagesProvider [INFO] ===---> Loaded package dot.org.biz";
			List<String> rows = Arrays.asList(s1, s2, s3, s4, s5);
			lines = rows.stream();
		}
		
		new App().parse(lines, args[0]);

	}
	
	private void parse(Stream<String> lines, String output) throws IOException {
		final FileWriter fw = new FileWriter(output);
		
		//@formatter:off
		lines.filter(line -> line.contains("===---> Loaded package"))
		.map(line -> line.split(" "))
		.map(arr -> arr[arr.length - 1])
		.forEach(packageName-> writeToFile(fw, packageName));
		//@formatter:on
		fw.close();
		lines.close();
	}

	private void writeToFile(FileWriter fw, String packageName) {
		try {
			fw.write(String.format("%s%n", packageName));
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
	}

}

(You can find more Java 8 feature tutorials at: Java Code Geeks – Java 8 Features Tutorial )

Playing With Java Concurrency

Recently I needed to transform some files, each containing a list (array) of objects in JSON format, into files that each contain separate lines of the same data (objects).

It was a one-time task and a simple one.
I did the reading and writing using some features of Java NIO.
I used GSON in the simplest way.
One thread runs over the files, converts, and writes.

The whole operation finished in a few seconds.

However, I wanted to play a little bit with concurrency.
So I enhanced the tool to work concurrently:

Threads
A Runnable for reading a file.
The reader runnables are submitted to an ExecutorService.
The output, which is a list of objects (User in the example), will be put in a BlockingQueue.

A Runnable for writing a file.
Each writer runnable will poll from the blocking queue.
It will write lines of data to a file.
I don’t add the writer Runnable to the ExecutorService, but instead just start a thread with it.
The runnable has a while (some boolean is true) {...} pattern.
More about that below…

Synchronizing Everything
The BlockingQueue is the interface between both types of threads.

As the writer runnable runs in a while loop (consumer), I wanted to be able to make it stop so the tool will terminate.
So I used two objects for that:

Semaphore
The loop that reads the input files increments a counter.
Once I finished traversing the input files and submitting the readers, I acquired the semaphore in the main thread:
semaphore.acquire(numberOfFiles);

Each writer runnable releases the semaphore after it finishes writing a file:
semaphore.release();

AtomicBoolean
The while loop of the writers uses an AtomicBoolean.
As long as the AtomicBoolean is true, the writer will continue.

In the main thread, just after the acquire of the semaphore, I set the AtomicBoolean to false.
This enables the writer threads to terminate.

Using Java NIO
In order to scan, read and write the file system, I used some features of Java NIO.

Scanning: Files.newDirectoryStream(inputFilesDirectory, "*.json");
Deleting output directory before starting: Files.walkFileTree...
BufferedReader and BufferedWriter: Files.newBufferedReader(filePath); Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());

One note: in order to generate random files for this example, I used Apache Commons Lang: RandomStringUtils.randomAlphabetic
All the code is on GitHub.

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.DirectoryStream;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

import com.google.gson.Gson;

public class JsonArrayToJsonLines {
	private final static Path inputFilesDirectory = Paths.get("src\\main\\resources\\files");
	private final static Path outputDirectory = Paths
			.get("src\\main\\resources\\files\\output");
	private final static Gson gson = new Gson();
	
	private final BlockingQueue<EntitiesData> entitiesQueue = new LinkedBlockingQueue<>();
	
	private AtomicBoolean stillWorking = new AtomicBoolean(true);
	private Semaphore semaphore = new Semaphore(0);
	int numberOfFiles = 0;

	private JsonArrayToJsonLines() {
	}

	public static void main(String[] args) throws IOException, InterruptedException {
		new JsonArrayToJsonLines().process();
	}

	private void process() throws IOException, InterruptedException {
		deleteFilesInOutputDir();
		final ExecutorService executorService = createExecutorService();
		DirectoryStream<Path> directoryStream = Files.newDirectoryStream(inputFilesDirectory, "*.json");
		
		for (int i = 0; i < 2; i++) {
			new Thread(new JsonElementsFileWriter(stillWorking, semaphore, entitiesQueue)).start();
		}

		directoryStream.forEach(new Consumer<Path>() {
			@Override
			public void accept(Path filePath) {
				numberOfFiles++;
				executorService.submit(new OriginalFileReader(filePath, entitiesQueue));
			}
		});
		
		semaphore.acquire(numberOfFiles);
		stillWorking.set(false);
		shutDownExecutor(executorService);
	}

	private void deleteFilesInOutputDir() throws IOException {
		Files.walkFileTree(outputDirectory, new SimpleFileVisitor<Path>() {
			@Override
			public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
				Files.delete(file);
				return FileVisitResult.CONTINUE;
			}
		});
	}

	private ExecutorService createExecutorService() {
		int numberOfCpus = Runtime.getRuntime().availableProcessors();
		return Executors.newFixedThreadPool(numberOfCpus);
	}

	private void shutDownExecutor(final ExecutorService executorService) {
		executorService.shutdown();
		try {
			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
				executorService.shutdownNow();
			}

			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
				// The pool did not terminate even after shutdownNow
			}
		} catch (InterruptedException ex) {
			executorService.shutdownNow();
			Thread.currentThread().interrupt();
		}
	}


	private static final class OriginalFileReader implements Runnable {
		private final Path filePath;
		private final BlockingQueue<EntitiesData> entitiesQueue;

		private OriginalFileReader(Path filePath, BlockingQueue<EntitiesData> entitiesQueue) {
			this.filePath = filePath;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			Path fileName = filePath.getFileName();
			try {
				BufferedReader br = Files.newBufferedReader(filePath);
				User[] entities = gson.fromJson(br, User[].class);
				System.out.println("---> " + fileName);
				entitiesQueue.put(new EntitiesData(fileName.toString(), entities));
			} catch (IOException | InterruptedException e) {
				throw new RuntimeException(filePath.toString(), e);
			}
		}
	}

	private static final class JsonElementsFileWriter implements Runnable {
		private final BlockingQueue<EntitiesData> entitiesQueue;
		private final AtomicBoolean stillWorking;
		private final Semaphore semaphore;

		private JsonElementsFileWriter(AtomicBoolean stillWorking, Semaphore semaphore,
				BlockingQueue<EntitiesData> entitiesQueue) {
			this.stillWorking = stillWorking;
			this.semaphore = semaphore;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			while (stillWorking.get()) {
				try {
					EntitiesData data = entitiesQueue.poll(100, TimeUnit.MILLISECONDS);
					if (data != null) {
						try {
							String fileOutput = outputDirectory.toString() + File.separator + data.fileName;
							Path fileOutputPath = Paths.get(fileOutput);
							BufferedWriter writer = Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());
							for (User user : data.entities) {
								writer.append(gson.toJson(user));
								writer.newLine();
							}
							writer.flush();
							System.out.println("=======================================>>>>> " + data.fileName);
						} catch (IOException e) {
							throw new RuntimeException(data.fileName, e);
						} finally {
							semaphore.release();
						}
					}
				} catch (InterruptedException e1) {
					// Interrupted while polling; loop again and re-check the stillWorking flag
				}
			}
		}
	}

	private static final class EntitiesData {
		private final String fileName;
		private final User[] entities;

		private EntitiesData(String fileName, User[] entities) {
			this.fileName = fileName;
			this.entities = entities;
		}
	}
}


Using Groovy for Bash (shell) Operations

Recently I needed to create a Groovy script that deletes some directories on a Linux machine.
Here’s why:
1.
We have a server for doing scheduled jobs.
Jobs such as ETL from one DB to another, File to DB etc.
The server activates clients, which are located on the machines we want to act on.
Most (almost all) of the jobs are written in groovy scripts.

2.
Part of our CI process is deploying a WAR into a dedicated server.
Then, we have a script that, among other things, uses a soft link to point ‘webapps’ to the newly created directory.
This deployment happens once an hour, which fills up the dedicated server quickly.

So I needed to create a script that checks all the directories in the correct location and deletes the old ones.
I decided to keep the latest 4 directories.
It’s currently a magic number in the script. If I want / need to, I can make it an input parameter. But I decided to start simple.

I decided to do it very simply:
1. List all directories with the prefix webapps_ in a known location
2. Sort them by modification time, descending, and delete all of them starting at index 4.

def numberOfDirectoriesToKeep = 4
def webappsDir = new File('/usr/local/tomcat/tomcat_aps')
def webDirectories = webappsDir.listFiles().grep(~/.*webapps_.*/)
def numberOfWebappsDirectories = webDirectories.size();

if (numberOfWebappsDirectories >= numberOfDirectoriesToKeep) {
  webDirectories.sort{ it.lastModified() }.reverse()[numberOfDirectoriesToKeep..numberOfWebappsDirectories-1].each {
    logger.info("Deleting ${it}");
    // here we'll delete the file. First try was doing a Java/groovy command of deleting directories
  }
} else {
  logger.info("Too few web directories")
}

It didn’t work.
Files were not deleted.
It turned out that the agent runs as a different user than the one that runs tomcat.
The agent did not have permission to remove the directories.

My solution was to run a shell command with sudo.

I found references at:
http://www.joergm.com/2010/09/executing-shell-commands-in-groovy/
and
http://groovy.codehaus.org/Executing+External+Processes+From+Groovy

To make a long story short, here’s the full script:

import org.slf4j.Logger
import com.my.ProcessingJobResult

def Logger logger = jobLogger
//ProcessingJobResult is proprietary
def ProcessingJobResult result = jobResult
try {
  logger.info("Deleting old webapps from CI - START")
  def numberOfDirectoriesToKeep = 4 // Can be externalized to an input parameter
  def webappsDir = new File('/usr/local/tomcat/tomcat_aps')
  def webDirectories = webappsDir.listFiles().grep(~/.*webapps_.*/)
  def numberOfWebappsDirectories = webDirectories.size();
  if (numberOfWebappsDirectories >= numberOfDirectoriesToKeep) {
    webDirectories.sort{ it.lastModified() }.reverse()[numberOfDirectoriesToKeep..numberOfWebappsDirectories-1].each {
      logger.info("Deleting ${it}");
      def deleteCommand = "sudo -u tomcat rm -rf " + it.toString();
      deleteCommand.execute();
    }
  } else {
    logger.info("Too few web directories")
  }
  result.status = Boolean.TRUE
  result.resultDescription = "Deleting old webapps from CI ended"
  logger.info("Deleting old webapps from CI - DONE")
} catch (Exception e) {
  logger.error(e.message, e)
  result.status = Boolean.FALSE
  result.resultError = e.message
}
return result

BTW,
There’s a minor off-by-one bug in the indexes, which I decided not to fix (for now), as we always have more directories than the number we keep.


JUnit Rules

Introduction
In this post I would like to show an example of how to use JUnit Rule to make testing easier.

Recently I inherited a rather complex system, in which not everything is tested, and even the tested code is complex.
Mostly I see lack of test isolation.
(I will write a different blog about working with Legacy Code).

One of the tests (and pieces of code) I am fixing actually tests several components together.
It also connects to the DB. It tests some logic and the interaction between components.
When code in a totally different location did not compile, the test could not run, because it loaded the whole Spring context.
The structure was such that before testing (any class), the whole Spring context was initiated.
The tests extend BaseTest, which loads the whole Spring context.

BaseTest also cleans the DB in the @After method.

Important note: This article is about changing tests which are not structured entirely correctly.
When creating new code and tests, they should be isolated, test one thing, etc.
Better tests should use a mock DB / mock dependencies, etc.
After I fix the test and refactor, I’ll have the confidence to make more changes.

Back to our topic…
So, what I got was a slow test suite, no isolation, and even problems running tests due to unrelated issues.

So I decided to separate the context loading from the DB connection, and both of them from the cleaning up of the database.

Approach
In order to achieve that I did three things:
The first was to change the inheritance of the test class.
It stopped inheriting BaseTest.
Instead, it inherits AbstractJUnit4SpringContextTests.
Now I can create my own context per test and not load everything.

Next, I needed two rules: a @ClassRule and a @Rule.
The @ClassRule will be responsible for the DB connection.
The @Rule will clean up the DB before / after each test.

But first, what are JUnit Rules?
A short explanation would be that they provide the possibility to intercept test methods, similar to the AOP concept.
@Rule allows us to intercept a method before and after its actual run.
@ClassRule intercepts the test class run.
A well-known @Rule is JUnit’s TemporaryFolder.

(Similar to @Before, @After and @BeforeClass).
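Here’s a minimal sketch of how the built-in TemporaryFolder rule is typically used (the test itself is illustrative):

import static org.junit.Assert.assertTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TemporaryFolderExampleTest {
	// Creates a fresh folder before each test and deletes it after the test ends
	@Rule
	public TemporaryFolder folder = new TemporaryFolder();

	@Test
	public void createsFileInTemporaryFolder() throws Exception {
		File created = folder.newFile("example.txt");
		assertTrue(created.exists());
	}
}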

Creating @Rule
The easy part was to create a Rule that cleans up the DB before and after a test method.
You need to implement TestRule, which has one method: Statement apply(Statement base, Description description);
You can do a lot with it.
I found out that usually I will have an inner class that extends Statement.
The rule I created does not create the DB connection, but receives it in the constructor.

Here’s the full code:

public class DbCleanupRule implements TestRule {
	private final DbConnectionManager connection;

	public DbCleanupRule(DbConnectionManager connection) {
		this.connection = connection;
	}

	@Override
	public Statement apply(Statement base, Description description) {
		return new DbCleanupStatement(base, connection);
	}

	private static final class DbCleanupStatement extends Statement {
		private final Statement base;
		private final DbConnectionManager connection;

		private DbCleanupStatement(Statement base, DbConnectionManager connection) {
			this.base = base;
			this.connection = connection;
		}

		@Override
		public void evaluate() throws Throwable {
			try {
				cleanDb();
				base.evaluate();
			} finally {
				cleanDb();
			}
		}

		private void cleanDb() {
			connection.doTheCleanup();
		}
	}
}

Creating @ClassRule
A ClassRule is actually also a TestRule.
The only difference from a Rule is how we use it in our test code.
I’ll show it below.

The challenge in creating this rule was that I wanted to use Spring context to get the correct connection.
Here’s the code:
(ExternalResource is TestRule)

public class DbConnectionRule extends ExternalResource {
	private DbConnectionManager connection;

	public DbConnectionRule() {
	}

	@Override
	protected void before() throws Throwable {
		ClassPathXmlApplicationContext ctx = null;
		try {
			ctx = new ClassPathXmlApplicationContext("/META-INF/my-db-connection-TEST-ctx.xml");
			connection = (DbConnectionManager) ctx.getBean("myDbConnection");
		} finally {
			if (ctx != null) {
				ctx.close();
			}
		}
	}

	@Override
	protected void after() {
	}

	public DbConnectionManager getDbConnection() {
		return connection;
	}
}

(Did you see that I could have made DbCleanupRule inherit from ExternalResource?)

Using it
The last part is how we use the rules.
A @Rule must be a public field.
A @ClassRule must be a public static field.

And there it is:

@ContextConfiguration(locations = { "/META-INF/one-dao-TEST-ctx.xml", "/META-INF/two-TEST-ctx.xml" })
public class ExampleDaoTest extends AbstractJUnit4SpringContextTests {
	@ClassRule
	public static DbConnectionRule connectionRule = new DbConnectionRule();

	@Rule
	public DbCleanupRule dbCleanupRule = new DbCleanupRule(connectionRule.getDbConnection());

	@Autowired
	private ExampleDao classToTest;

	@Test
	public void foo() {
	}
}

That’s all.
Hope it helps.

Eyal

[Edit]
I got some good remarks from Logan Mzz at DZone: http://java.dzone.com/articles/junit-rules#comment-125673

  1. A link to JUnit Rules: https://github.com/junit-team/junit/wiki/Rules
  2. There’s the ErrorCollector rule, which avoids annoying test-fail-fix cycles for a single test (see the sketch below).
  3. And RuleChain, which is described in the comment
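Here’s a minimal sketch of ErrorCollector usage (the checks themselves are illustrative); it lets a single test report several failed checks in one run:

import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class ErrorCollectorExampleTest {
	// Collects failed checks and reports all of them at the end of the test
	@Rule
	public ErrorCollector collector = new ErrorCollector();

	@Test
	public void checksSeveralThingsInOneRun() {
		collector.checkThat("first check", 1 + 1, equalTo(2));
		collector.checkThat("second check", "a" + "b", equalTo("ab"));
	}
}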


RSS Reader Using: ROME, Spring MVC, Embedded Jetty

In this post I will show some guidelines for creating a Spring web application, running it using Jetty, and using an external library called ROME for RSS reading.

General

I have recently created a sample web application that acts as an RSS reader.
I wanted to examine ROME for RSS reading.
I also wanted to create the application using Spring container and MVC for the simplest view.
For rapid development, I used Jetty as the server, using a simple java class for it.
All the code can be found at GitHub, eyalgo/rss-reader.

Content

  1. Maven Dependencies
  2. Jetty Server
  3. Spring Dependency
  4. Spring MVC
  5. ROME

Maven Dependencies

At first, I could not figure out the correct Jetty version to use.
There is one with the group-id mortbay, and another by eclipse.
After some careful examination and trial and error, I took the eclipse library.
Spring is just standard.
I found ROME, with the newest version, under GitHub. It’s still a SNAPSHOT.

Here’s the list of the dependencies:

  • Spring
  • Jetty
  • ROME and rome-fetcher
  • Logback and SLF4J
  • For testing:
    • JUnit
    • Mockito
    • Hamcrest
    • spring-test

The project’s pom file can be found at: https://github.com/eyalgo/rss-reader/blob/master/pom.xml

Jetty Server

A few years ago I worked with the Wicket framework and got to know Jetty and its easy usage for creating a server.
I decided to go in that direction and to skip the standard web server running with WAR deployment.

There are several ways to create the Jetty server.
I decided to create the server, using a web application context.

First, create the context:

private WebAppContext createContext() {
  WebAppContext webAppContext = new WebAppContext();
  webAppContext.setContextPath("/");
  webAppContext.setWar(WEB_APP_ROOT);
  return webAppContext;
}

Then, create the server and add the context as handler:

  Server server = new Server(port);
  server.setHandler(webAppContext);

Finally, start the server:

  try {
    server.start();
  } catch (Exception e) {
    LOGGER.error("Failed to start server", e);
    throw new RuntimeException();
  }

Everything is under https://github.com/eyalgo/rss-reader/tree/master/src/test/java/com/eyalgo/rssreader/server

Spring Project Structure

RSS Reader Project Structure

Spring Dependency

In web.xml I am declaring application-context.xml and web-context.xml .
In web-context.xml , I am telling Spring where to scan for components:
<context:component-scan base-package="com.eyalgo.rssreader"/>
In application-context.xml I am adding a bean, which is an external class and therefore I can’t scan it (i.e. use annotations):
<bean id="fetcher" class="org.rometools.fetcher.impl.HttpURLFeedFetcher"/>

Besides scanning, I am adding the correct annotations in the correct classes:
@Repository
@Service
@Controller
@Autowired

Spring MVC

In order to have some basic view of the RSS feeds (and atoms), I used a simple MVC and JSP pages.
To create a controller, I needed to add @Controller to the class.
I added @RequestMapping("/rss") so all requests are prefixed with rss.

Each method has a @RequestMapping declaration. I decided that everything is GET.

Adding a Parameter to the Request

Just add @RequestParam("feedUrl") before the parameter of the method.

Redirecting a Request

After adding an RSS location, I wanted to redirect the response to show all current RSS items.
So the method for adding an RSS feed needed to return a String.
The returned value is: “redirect:all”.

  @RequestMapping(value = "feed", method = RequestMethod.GET)
  public String addFeed(@RequestParam("feedUrl") String feedUrl) {
    feedReciever.addFeed(feedUrl);
    return "redirect:all";
  }

Return a ModelAndView Class

In Spring MVC, when a method returns a String, the framework looks for a JSP page with that name.
If there is none, then we’ll get an error.
(If you want to return just the String, you can add @ResponseBody to the method.)
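As an illustration, here’s a hypothetical method (not part of the project) inside such a @Controller class:

  @RequestMapping(value = "ping", method = RequestMethod.GET)
  @ResponseBody
  public String ping() {
    // The returned String is written directly to the response body; no JSP lookup happens
    return "pong";
  }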

In order to use ModelAndView, you need to create one with a name:
ModelAndView modelAndView = new ModelAndView("rssItems");
The name will tell Spring MVC which JSP to refer to.
In this example, it will look for rssItems.jsp.

Then you can add to the ModelAndView “objects”:

  List<FeedItem> items = itemsRetriever.get();
  ModelAndView modelAndView = new ModelAndView("rssItems");
  modelAndView.addObject("items", items);

In the JSP page, you need to refer to the names of the objects you added.
And then, you can access their properties.
So in this example, we’ll have the following in rssItems.jsp:

  <c:forEach items="${items}" var="item">
    <div>
      <a href="${item.link}" target="_blank">${item.title}</a><br>
        ${item.publishedDate}
    </div>
  </c:forEach>

Note
Spring “knows” to add jsp as a suffix to the ModelAndView name because I declared it in web-context.xml,
in the bean of class org.springframework.web.servlet.view.InternalResourceViewResolver.
By setting the prefix, this bean also tells Spring where to look for the jsp pages.
Have a look at:
https://github.com/eyalgo/rss-reader/blob/master/src/main/java/com/eyalgo/rssreader/web/RssController.java
https://github.com/eyalgo/rss-reader/blob/master/src/main/webapp/WEB-INF/views/rssItems.jsp

Error Handling

There are several ways to handle errors in Spring MVC.
I chose a generic way, in which for any error, a general error page will be shown.

First, add @ControllerAdvice to the class that will handle the errors.

Second, create a method per type of exception you want to catch.
You need to annotate the method with @ExceptionHandler. The parameter tells which exception the method will handle.

You can have one method for IllegalArgumentException, another for a different exception, and so on.

The return value can be anything, and it will act as a normal controller. That means having a jsp (for example) with the name of the object the method returns.

In this example, the method catches all exceptions and activates error.jsp, adding the message to the page.

  @ExceptionHandler(Exception.class)
  public ModelAndView handleAllException(Exception e) {
    ModelAndView model = new ModelAndView("error");
    model.addObject("message", e.getMessage());
    return model;
  }

ROME

ROME is an easy-to-use library for handling RSS feeds.
https://github.com/rometools/rome
rome-fetcher is an additional library that helps to get (fetch) RSS feeds from external sources, such as HTTP or a URL.
https://github.com/rometools/rome-fetcher

As of now, the latest build is 2.0.0-SNAPSHOT

An example on how to read an input RSS XML file can be found at:
https://github.com/eyalgo/rss-reader/blob/master/src/test/java/com/eyalgo/rssreader/runners/MetadataFeedRunner.java

To make life easier, I used rome-fetcher.
It gives you the ability to pass a URL (an RSS feed) and get the SyndFeed out of it.

If you want, you can add caching, so it won’t download cached items (items that were already downloaded).
All you need is to create the fetcher with a FeedFetcherCache parameter in the constructor.
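For example, a minimal sketch using the rome-fetcher classes (HashMapFeedInfoCache is one of the library’s in-memory cache implementations):

  // Use an in-memory cache so previously fetched items are not downloaded again
  FeedFetcherCache cache = HashMapFeedInfoCache.getInstance();
  FeedFetcher fetcher = new HttpURLFeedFetcher(cache);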

Usage:

  @Override
  public List<FeedItem> extractItems(String feedUrl) {
    try {
      List<FeedItem> result = Lists.newLinkedList();
      URL url = new URL(feedUrl);
      SyndFeed feed = fetcher.retrieveFeed(url);
      List<SyndEntry> entries = feed.getEntries();
      for (SyndEntry entry : entries) {
        result.add(new FeedItem(entry.getTitle(), entry.getLink(), entry.getPublishedDate()));
      }
      return result;
    } catch (IllegalArgumentException | IOException | FeedException | FetcherException e) {
      throw new RuntimeException("Error getting feed from " + feedUrl, e);
    }
  }

https://github.com/eyalgo/rss-reader/blob/master/src/main/java/com/eyalgo/rssreader/service/rome/RomeItemsExtractor.java

Note
If you get a warning message (printed to System.out) saying that fetcher.properties is missing, just add an empty file under resources (or in the root of the classpath).

Summary

This post covered several topics.
You can also have a look at the way much of the code is tested.
Check the matchers and mocks.

If you have any remarks, please drop a note.

Eyal


Why Abstraction is Really Important

Abstraction
Abstraction is one of the key elements of good software design.
It helps encapsulate behavior. It helps decouple software elements. It helps create more self-contained modules. And much more.

Abstraction makes the application much easier to extend. It makes refactoring much easier.
When developing with a higher level of abstraction, you communicate the behavior more and the implementation less.

General
In this post, I want to introduce a simple scenario that shows how, by choosing a simple solution, we can end up with tight coupling and a rigid design.

Then I will briefly describe how we can avoid situations like this.

Case study description
Let’s assume that we have a domain object called RawItem.

public class RawItem {
    private final String originator;
    private final String department;
    private final String division;
    private final Object[] moreParameters;
    
    public RawItem(String originator, String department, String division, Object... moreParameters) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.moreParameters = moreParameters;
    }
}

The first three parameters represent the item’s key.
I.e. an item comes from an originator, a department and a division.
The “moreParameters” field is just to emphasize that the item has more parameters.

This triplet has two basic usages:
1. As a key for storing in the DB
2. As a key in maps (key to RawItem)

Storing in DB based on the key
The DB tables are sharded in order to distribute the items evenly.
Sharding is done by a hash-key-modulo function.
This function works on a string.

Suppose we have N shard tables (RAW_ITEM_REPOSITORY_00, RAW_ITEM_REPOSITORY_01, .., RAW_ITEM_REPOSITORY_NN);
then we’ll distribute the items based on some function and a modulo:

String rawKey = originator + "_" + department + "_" + division;
// func is a String -> Integer function, N = # of shards
// Representation of the key is described below
int shard = func(rawKey) % N;

Using the key in maps
The second usage for the triplet is mapping the items for fast lookup.
So, when NOT using abstraction, the maps will usually look like:

Map<String, RawItem> mapOfItems = new HashMap<>();
// Fill the map...

“Improving” the class
We see that we have a common usage of the key as a string, so we decide to put the string representation in RawItem.

// new member
private final String key;

// in the constructor:
this.key = this.originator + "_" + this.department + "_"  + this.division;

// and a getter
public String getKey() {
  return key;
}

Assessment of the design
There are two flaws here:
1. Coupling between the sharding distribution and the items’ mapping
2. The mapping key is rigid. Any change forces a change of the key, which might introduce hard-to-find bugs

And then comes a new requirement
Up until now, the triplet originator, department and division made up the key of an item.
But now, a new requirement has come in.
A division can have a subdivision.
It means that, unlike before, we can have two different items from the same triplet. The items will differ by the subdivision attribute.

Difficult to change
Regarding the DB distribution, we’ll need to keep the concatenated key of the triplet.
We must keep the modulo function the same. So distribution will remain based on the triplet, but the schema will change and have a ‘subdivision’ column as well.
We’ll change the queries to use the subdivision together with the original key.

In regard to the mapping, we’ll need to do a massive refactoring and to pass an ItemKey (see below) instead of just String.

Abstraction of the key
Let’s create ItemKey

public class ItemKey {
    private final String originator;
    private final String department;
    private final String division;
    private final String subdivision;

    public ItemKey(String originator, String department, String division, String subdivision) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.subdivision = subdivision;
    }

    public String asDistribution() {
        return this.originator + "_" + this.department + "_"  + this.division;
    }
}
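One note: since ItemKey is about to be used as a key in a HashMap (see below), it must also override equals and hashCode. Here’s a minimal sketch of those methods (assuming java.util.Objects is imported):

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof ItemKey)) {
            return false;
        }
        ItemKey other = (ItemKey) obj;
        // Two keys are equal only if all four parts match
        return Objects.equals(originator, other.originator)
                && Objects.equals(department, other.department)
                && Objects.equals(division, other.division)
                && Objects.equals(subdivision, other.subdivision);
    }

    @Override
    public int hashCode() {
        return Objects.hash(originator, department, division, subdivision);
    }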

And,

Map<ItemKey, RawItem> mapOfItems = new HashMap<>();
// Fill the map...

// new constructor for RawItem
public RawItem(ItemKey itemKey, Object... moreParameters) {
    // fill the fields
}

Lessons Learned and Conclusion
I wanted to show how a simple decision can really hurt.

And how, by a small change, we made the key abstract.
In the future the key can have even more fields, but we’ll need to change only its inner implementation.
The logic and the mapping usage should not need to change.

Regarding the change process:
I haven’t described how to do the refactoring, as it really depends on how the code looks and how well it is tested.
In our case, some parts were easy, while others were really hard. The hard parts were around code that looked deep into the implementation of the key (the string) and the item.

This situation was real
We actually had this flaw in our design.
Everything was fine for two years, until we had to change the key (add the subdivision).
Luckily all of our code is tested, so we could see what broke and fix it.
But it was painful.

There are two abstractions that we could have implemented initially:
1. The more obvious one is using a KEY class (as described above), even if it only has one String field
2. Examining every map usage for whether we’d benefit from hiding it behind an abstraction

The second abstraction is harder to grasp and to fully understand and implement.

So,
do abstraction, tell a story, use the interfaces, and don’t get into the details while telling it.


Law of Demeter

Reduce coupling and improve encapsulation…

General
In this post I want to go over Law of Demeter (LoD).
I find this topic extremely important for having clean, well-designed and maintainable code.

In my experience, seeing it broken is a huge smell of bad design.
Following the law, or refactoring based on it, leads to much improved, more readable and more maintainable code.

So what is Law of Demeter?
I will start by mentioning the four basic rules:

Law of Demeter says that a method M of object O can access / invoke methods of:

  1. O itself
  2. M’s input arguments
  3. Any object created in M
  4. O’s fields / dependencies

These are fairly simple rules.

Let’s put this in other words:
Each unit (method) should have limited knowledge about other units.

Metaphors
The most common one is: Don’t talk to strangers

How about this:
Suppose I buy something at 7-11.
When I need to pay, will I give my wallet to the clerk so she will open it and take the money out?
Or will I hand her the money directly?

How about this metaphor:
When you take your dog out for a walk, do you tell it to walk or its legs?

Why do we want to follow this rule?

  • We can change a class without having a ripple effect of changing many others.
  • We can change called methods without changing anything else.
  • Using LoD makes our tests much easier to construct. We don’t need to write so many ‘when‘ clauses for mocks that return mocks that return mocks.
  • It improves the encapsulation and abstraction (I’ll show in the example below).
    But basically, we hide “how things work”.
  • It makes our code less coupled. A caller method is coupled to only one object, and not to all of the inner dependencies.
  • It will usually model better the real world.
    Take as an example the wallet and payment.

Counting Dots?
Although many dots usually imply a LoD violation, sometimes it doesn’t make sense to “merge the dots”.
Does:
getEmployee().getChildren().getBirthdays()
suggest that we do something like:
getEmployeeChildrenBirthdays() ?
I am not entirely sure.

Too Many Wrapper Classes
This is another possible outcome of trying to avoid LoD violations.
In this particular situation, I strongly believe that it’s another design smell which should be taken care of.

As always, we must have common sense while coding, cleaning and / or refactoring.

Example
Suppose we have a class: Item
The item can hold multiple attributes.
Each attribute has a name and values (it’s a multi-value attribute).

The simplest implementation would be using a Map.

public class Item {
	private final Map<String, Set<String>> attributes;

	public Item(Map<String, Set<String>> attributes) {
		this.attributes = attributes;
	}

	public Map<String, Set<String>> getAttributes() {
		return attributes;
	}
}

Let’s have a class ItemSaver that uses the Item and its attributes:
(please ignore the unstructured methods. This is an example for LoD, not SRP 🙂 )

public class ItemSaver {
	private String valueToSave;

	public ItemSaver(String valueToSave) {
		this.valueToSave = valueToSave;
	}

	public void doSomething(String attributeName, Item item) {
		Set<String> attributeValues = item.getAttributes().get(attributeName);
		for (String value : attributeValues) {
			if (value.equals(valueToSave)) {
				doSomethingElse();
			}
		}
	}

	private void doSomethingElse() {
	}
}

Suppose I know (from the context of the application) that it’s a single value.
And I want to take it. Then the code would look like this:
Set<String> attributeValues = item.getAttributes().get(attributeName);
String singleValue = attributeValues.iterator().next();
// String singleValue = item.getAttributes().get(attributeName).iterator().next();

I think it is clear that we have a problem.
Wherever we use the attributes of the Item, we know how it works. We know its inner implementation.
It also makes our tests much harder to maintain.

Let’s see an example of a test using mocks (Mockito).
You can imagine how much effort it would take to change and maintain it.

Item item = mock(Item.class);
Map<String, Set<String>> attributes = mock(Map.class);
Set<String> values = mock(Set.class);
Iterator<String> iterator = mock(Iterator.class);
when(iterator.next()).thenReturn("the single value");
when(values.iterator()).thenReturn(iterator);
when(attributes.containsKey("the-key")).thenReturn(true);
when(attributes.get("the-key")).thenReturn(values);
when(item.getAttributes()).thenReturn(attributes);

We could use a real Item instead of mocking, but we’d still need to create lots of pre-test data.

Let’s recap:

  • We exposed the inner implementation of how Item holds its attributes
  • In order to use attributes, we needed to ask the item and then to ask the inner objects (the values).
  • If we ever want to change the attributes implementation, we will need to make changes in the classes that use Item and the attributes. Probably a lot of classes.
  • Constructing the test is tedious, cumbersome, error-prone and requires lots of maintenance.

Improvement
The first improvement would be to let Item delegate to the attributes.

public class Item {
	private final Map<String, Set<String>> attributes;

	public Item(Map<String, Set<String>> attributes) {
		this.attributes = attributes;
	}

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.get(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return values(attributeName).iterator().next();
	}
}

And the test becomes much simpler:

Item item = mock(Item.class);
when(item.getSingleValue("the-key")).thenReturn("the single value");

We are (almost) totally hiding the implementation of attributes from other classes.
The client classes are not aware of the implementation, except in two cases:

  1. Item still knows how attributes are built.
  2. The class that creates Item (whichever it is), also knows the implementation of attributes.

The two points above mean that if we change the implementation of Attributes (to something other than a map), at least two other classes will need to change. This is a great example of high coupling.

The Next Step Improvement
The solution above will sometimes (usually?) be enough.
As pragmatic programmers, we need to know when to stop.
However, let’s see how we can improve the first solution even further.

Create a class Attributes:

public class Attributes {
	private final Map<String, Set<String>> attributes;

	public Attributes() {
		this.attributes = new HashMap<>();
	}

	public boolean attributeExists(String attributeName) {
		return attributes.containsKey(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.get(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return values(attributeName).iterator().next();
	}

	public Attributes addAttribute(String attributeName, Collection<String> values) {
		this.attributes.put(attributeName, new HashSet<>(values));
		return this;
	}
}

And the Item that uses it:

public class Item {
	private final Attributes attributes;

	public Item(Attributes attributes) {
		this.attributes = attributes;
	}

	public boolean attributeExists(String attributeName) {
		return attributes.attributeExists(attributeName);
	}

	public Set<String> values(String attributeName) {
		return attributes.values(attributeName);
	}

	public String getSingleValue(String attributeName) {
		return attributes.getSingleValue(attributeName);
	}
}

(Did you notice? The implementation of attributes inside Item was changed, but the test did not need to change. This is thanks to the small delegation change.)

In the second solution we improved the encapsulation of Attributes.
Now even Item does not know how it works.
We can change the implementation of Attributes without touching any other class.
We can make different implementations of Attributes:
– An implementation that holds a Set of values (as in the example).
– An implementation that holds a List of values.
– A totally different data structure that we can think of.

As long as all of our tests pass, we can be sure that everything is OK.

What did we get?

  • The code is much more maintainable.
  • Tests are simpler and more maintainable.
  • It is much more flexible. We can change implementation of Attributes (map, set, list, whatever we choose).
  • Changes in Attributes do not affect any other part of the code. Not even the classes that use it directly.
  • Modularization and code reuse. We can use Attributes class in other places in the code.

Project Migration from Sourceforge to GitHub

I have an old project, named JVDrums, which was located at Sourceforge.
http://sourceforge.net/projects/jvdrums/

About JVDrums
It was written around 6 years ago (this is the date shown in the commit history: 2008-05-09).

The project is a MIDI client for Roland Electronic Drums for uploading and backing up drumsets.
It was an early attempt to use testing during development (an early TDD attempt).

I used TestNG for the testing.

Initially I created it for my own model, the Roland TD-12. I needed a small app for uploading drumsets which other users created and sent me.
When I published it in some forums I was asked to develop the client for other models (TD-6, TD-10).

That was cool, but I didn’t have the real modules (each model has its own module), so how could I develop and test for them?

Each module has a MIDI specification, so I downloaded the specifications from Roland’s website.
Then, I created tests that simulated the structure of the MIDI file and I could hack the upload, download and editing.

I also created a basic UI using Java Swing.

Migration
All I needed to do was follow the instructions from:
https://github.com/nirvdrum/svn2git#readme

And here we go: https://github.com/eyalgo/jvdrums

So if you need to migrate from Sourceforge to GitHub just follow that link.

Using Reflection for Testing

I am working on a presentation about the ‘Single Responsibility Principle’, based on my previous post.
It takes most of my time.

In the meantime, I want to share sample code showing how I test inner fields in my classes.
I am doing it for a special case of testing, which is more of an integration test.
In the standard unit testing of the dependent class, I am using mocks of the dependencies.

The Facts

  1. All of the fields (and dependencies) in our classes are private
  2. The classes do not have getters for their dependencies
  3. We wire things up using Spring (XML context)
  4. I want to verify that dependency interface A is wired correctly into dependent class B

One approach would be to wire everything and then run some kind of integration test of the logic.
I don’t want to do this. It would make the test hard to maintain.

The other approach is to check wiring directly.
And for that I am using reflection.

Below is a sample of the testing method, and its usage.
Notice how I catch the exception and throw a RuntimeException in case there is a problem.
This way, I have cleaner tested code.


// Somewhere in a different utility class for testing
@SuppressWarnings("unchecked")
public static <T> T realObjectFromField(Class<?> clazz, String fieldName, Object object) {
	Field declaredField = accessibleField(clazz, fieldName);
	try {
		return (T) declaredField.get(object);
	} catch (IllegalArgumentException | IllegalAccessException e) {
		throw new RuntimeException(e);
	}
}

private static Field accessibleField(Class<?> clazz, String fieldName) {
	try {
		Field declaredField = clazz.getDeclaredField(fieldName);
		declaredField.setAccessible(true);
		return declaredField;
	} catch (NoSuchFieldException | SecurityException e) {
		throw new RuntimeException(e);
	}
}

// This is how we use it in a test method
import static mypackage.ReflectionUtils.realObjectFromField;

ItemFiltersMapperByFlag mapper = realObjectFromField(ItemsFilterExecutor.class, "filtersMapper", filterExecutor);
assertNotNull("mapper is null. Check wiring", mapper);

Spring Context with Properties, Collections and Maps

In this post I want to show how I added the XML context file to the Spring application.
The second aspect I will show is the usage of a properties file for external constant values.

All of the code is located at: https://github.com/eyalgo/request-validation (as previous posts).

I decided to do all the wiring using XML files and not annotations, for several reasons:

  1. I am simulating a situation where the framework is not part of the codebase (it’s an external library) and it is not annotated with anything
  2. I want to emphasize the modularity of the system by using several XML files (yes, I know it can be done using @Configuration)
  3. Although I know Spring, I still feel more comfortable having more control via the XML files
  4. For Spring newbies, I think they should start with XML configuration files, and only once they grasp the idea and the technology, start using annotations

I will expand on the modularization and how the sample app is constructed in a later post.

Let’s start with the properties file. Here’s part of it:

flag.external = EXTERNAL
flag.internal = INTERNAL
flag.even = EVEN
flag.odd = ODD

validation.acceptedIds=flow1,flow2,flow3,flow4,flow5

filter.external.name.max = 10
filter.external.name.min = 4

filter.internal.name.max = 6
filter.internal.name.min = 2

Properties File Location
We also need to tell Spring the location of our property file.
You can use PropertyPlaceholderConfigurer , or you can use the context element, as shown here:

<context:property-placeholder location="classpath:spring/flow.properties" />

Simple Bean Example
This is a very basic example of how to create a bean:

<bean id="evenIdFilter"
  class="org.eyal.requestvalidation.flow.example.flow.itemsfilter.filters.EvenIdFilter">
</bean>

Using Simple Property
Suppose you want to add a property attribute to your bean.
I always use constructor injection, so I will use constructor-arg in the bean declaration.

<bean id="longNameExternalFilter"
    class="org.eyal.requestvalidation.flow.example.flow.itemsfilter.filters.NameTooLongFilter">
    <constructor-arg value="${filter.external.name.max}" />
</bean>

List Example
Suppose you have a class that gets a list (or set) of objects (either other beans, or just Strings).
You can add it as a parameter in the constructor-arg, but I prefer to create the list outside the bean declaration and refer to it in the bean.
Here’s how:

<util:list id="defaultFilters">
  <ref bean="emptyNameFilter" />
  <ref bean="someOtherBean" />
</util:list>

And

<bean id="itemFiltersMapperByFlag"
  class="org.eyal.requestvalidation.flow.itemsfilter.ItemFiltersMapperByFlag">
   <constructor-arg ref="defaultFilters" />
   <constructor-arg ref="filtersByFlag" />
</bean>

Collection of Values in the Properties File
What if I want to set a list (set) of values to pass to a bean?
Not a list of beans as described above.
Then in the properties file I will put:
validation.acceptedIds=flow1,flow2,flow3,flow4,flow5

And in bean:

<bean id="acceptedIdsValidation"
  class="org.eyal.requestvalidation.flow.example.flow.requestvalidation.validations.AcceptedIdsValidation">
  <constructor-arg value="#{'${validation.acceptedIds}'.split(',')}" />
</bean>

See how I used the Spring Expression Language (SpEL).

Map Injection Example
Here’s a sample of an empty map creation:

<util:map id="validationsByFlag">
</util:map>

Here’s a map with some entries.
See how the keys are also set from the properties file.

<util:map id="filtersByFlag">
  <entry key="${flag.external}" value-ref="filtersForExternal" />
  <entry key="${flag.internal}" value-ref="filtersForInternal" />
  <entry key="${flag.even}" value-ref="filtersForEven" />
  <entry key="${flag.odd}" value-ref="filtersForOdd" />
</util:map>


In the map example above, the keys are Strings from the properties file.
The values are references to other beans, as described above.

The usage is the same as for the list:

<bean id="itemFiltersMapperByFlag"
  class="org.eyal.requestvalidation.flow.itemsfilter.ItemFiltersMapperByFlag">
   <constructor-arg ref="defaultFilters" />
   <constructor-arg ref="filtersByFlag" />
</bean>

Conclusion
In this post I showed some basic examples of Spring configuration using XML and a properties file.
I strongly believe that until the team fully understands the way Spring works, everyone should stick with this kind of configuration.
If you find that your configuration files are getting too big, you may want to check your design. Annotations will just hide a poorly designed system.

Spring and Maven Configuration

This is the first post in a series demonstrating how to use Spring in an application.
In the series I will show some how-tos for technical aspects (context file, properties, etc.).
I will also show some design aspects and the testing approach.

In this post I will simply show how to integrate Spring using Maven.

The basic dependency is spring-context. Through Maven’s transitive dependencies, spring-core will be in the project as well.

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>${spring.version}</version>
</dependency>

If we want to use annotations such as @Inject, which comes from Java’s JSR-330, we’ll add the following dependency:

<dependency>
  <groupId>javax.inject</groupId>
  <artifactId>javax.inject</artifactId>
  <version>1</version>
</dependency>

And in order to be able to test using Spring, here’s what we’ll need (note that the scope is test):

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-test</artifactId>
  <version>${spring.version}</version>
  <scope>test</scope>
</dependency>

You can see that I didn’t add spring-core as it comes with the context / test dependencies.
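
As an example of what spring-test gives us, here’s a minimal sketch of a Spring-enabled JUnit test. The context file name and SomeBean are my assumptions for illustration:

import static org.junit.Assert.assertNotNull;

import javax.inject.Inject;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:spring/flow-context.xml")
public class WiringTest {
  // SomeBean is a hypothetical bean defined in the context file
  @Inject
  private SomeBean someBean;

  @Test
  public void verifyWiring() {
    assertNotNull("someBean is null. Check wiring", someBean);
  }
}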

You can find the code at: https://github.com/eyalgo/request-validation

Some notes about the code.

I added the Spring code, the context file and Spring’s Maven dependencies to the test environment.
This is on purpose.
I want to emphasize the separation of the validation-filter framework from the usage and wiring of an application.

In real life, you might have an external library that you’ll want to use in a Spring-injected application.
So in this code, the test environment simulates the application, and the src is the “external library”.

Request Validation and Filtering by Flags – Redesign and Refactoring

General
In the previous posts I started describing a validation / filtering framework we’re building.
While showing the code, I am trying to demonstrate clean code, test orientation and code evolution.
The process has some agility in it; we know the end requirements, but the exact details are evolving over time.

During the development we changed the code to be more general as we saw patterns emerging in it.
The code evolved as the flow did.

The flow as we now understand it
Here’s a diagram of the flow we’ll implement

Request Sequence

The Pattern
At each step of the sequence (validation, filtering, action), we recognized the same pattern:

  1. We have specific implementations (filters, validations)
  2. We have an engine that wraps up the specific implementations
  3. We need to map the implementations by flag and, based on the request’s flags, select the appropriate implementations
  4. We need a class that calls the mapper and then the engine

A diagram showing the pattern

The Pattern

Source Code
In order to show some of the evolution of the code, and how refactoring changed it, I added tags in GitHub after major changes.

Code Examples
Let’s see what came up from the mapper pattern.

import java.util.List;
import java.util.Map;
import java.util.Set;

import org.eyal.requestvalidation.model.Request;

import com.google.common.collect.Lists;
import com.google.common.collect.Sets;

public interface MapperByFlag<T> {
  List<T> getOperations(Request request);
}

public abstract class AbstractMapperByFlag<T> implements MapperByFlag<T> {
  private List<T> defaultOperations;
  private Map<String, List<T>> mapOfOperations;

  public AbstractMapperByFlag(List<T> defaultOperations, Map<String, List<T>> mapOfOperations) {
    this.defaultOperations = defaultOperations;
    this.mapOfOperations = mapOfOperations;
  }

  @Override
  public final List<T> getOperations(Request request) {
    Set<T> selectedFilters = Sets.newHashSet(defaultOperations);
    Set<String> flags = request.getFlags();
    for (String flag : flags) {
      if (mapOfOperations.containsKey(flag)) {
        selectedFilters.addAll(mapOfOperations.get(flag));
      }
    }
    return Lists.newArrayList(selectedFilters);
  }
}
public class RequestValidationByFlagMapper extends AbstractMapperByFlag<RequestValidation> {
  public RequestValidationByFlagMapper(List<RequestValidation> defaultValidations,
      Map<String, List<RequestValidation>> mapOfValidations) {
    super(defaultValidations, mapOfValidations);
  }
}

public class ItemFiltersByFlagMapper extends AbstractMapperByFlag<Filter> {
  public ItemFiltersByFlagMapper(List<Filter> defaultFilters, Map<String, List<Filter>> mapOfFilters) {
    super(defaultFilters, mapOfFilters);
  }
}
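
And a hypothetical usage sketch, just to show what the mapper gives us (SomeValidation and ExternalValidation are made-up names for illustration):

  // Hypothetical usage; SomeValidation and ExternalValidation are illustrative names
  public List<RequestValidation> validationsFor(Request request) {
    List<RequestValidation> defaults = Lists.newArrayList(new SomeValidation());
    Map<String, List<RequestValidation>> byFlag = ImmutableMap.<String, List<RequestValidation>> builder()
        .put("EXTERNAL", Lists.<RequestValidation> newArrayList(new ExternalValidation())).build();
    MapperByFlag<RequestValidation> mapper = new RequestValidationByFlagMapper(defaults, byFlag);
    // The result is the default validations plus the ones mapped to the request's flags
    return mapper.getOperations(request);
  }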

I created a test for the abstract class to show the flow itself.
The tests of the implementations use Java reflection to verify that the correct injected parameters are passed to the super constructor.
I am showing the imports here as well, as a reference for the static imports and for the Mockito and Hamcrest packages and classes.

import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.when;

import java.util.List;
import java.util.Map;

import org.eyal.requestvalidation.model.Request;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;

@RunWith(MockitoJUnitRunner.class)
public class AbstractMapperByFlagTest {
	private final static String FLAG_1 = "flag 1";
	private final static String FLAG_2 = "flag 2";

	@Mock
	private Request request;

	private String defaultOperation1 = "defaultOperation1";
	private String defaultOperation2 = "defaultOperation2";
	private String mapOperation11 = "mapOperation11";
	private String mapOperation12 = "mapOperation12";
	private String mapOperation23 = "mapOperation23";

	private MapperByFlag<String> mapper;

	@Before
	public void setup() {
		List<String> defaults = Lists.newArrayList(defaultOperation1, defaultOperation2);
		Map<String, List<String>> mapped = ImmutableMap.<String, List<String>> builder()
		        .put(FLAG_1, Lists.newArrayList(mapOperation11, mapOperation12))
		        .put(FLAG_2, Lists.newArrayList(mapOperation23, mapOperation11)).build();
		mapper = new AbstractMapperByFlag<String>(defaults, mapped) {
		};
	}

	@Test
	public void whenRequestDoesNotHaveFlagsShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet());

		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}

	@Test
	public void whenRequestHasFlagsNotInMappingShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet("un-mapped-flag"));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasOneFlagShouldReturnWithDefaultAndMappedFilters() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasTwoFlagsShouldReturnWithDefaultAndMappedFiltersWithoutDuplications() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1, FLAG_2));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2, mapOperation23));
	}
}

import static org.hamcrest.Matchers.sameInstance;
import static org.junit.Assert.assertThat;

import java.lang.reflect.Field;
import java.util.List;
import java.util.Map;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class RequestValidationByFlagMapperTest {

	@Mock
	private List<RequestValidation> defaultValidations;
    
	@Mock
	private Map<String, List<RequestValidation>> mapOfValidations;

	@InjectMocks
	private RequestValidationByFlagMapper mapper;

	@SuppressWarnings("unchecked")
	@Test
	public void verifyParameters() throws NoSuchFieldException, SecurityException, IllegalArgumentException,
	        IllegalAccessException {
		Field defaultOperationsField = AbstractMapperByFlag.class.getDeclaredField("defaultOperations");
		defaultOperationsField.setAccessible(true);
		List<RequestValidation> actualFilters = (List<RequestValidation>) defaultOperationsField.get(mapper);
		assertThat(actualFilters, sameInstance(defaultValidations));

		Field mapOfFiltersField = AbstractMapperByFlag.class.getDeclaredField("mapOfOperations");
		mapOfFiltersField.setAccessible(true);
		Map<String, List<RequestValidation>> actualMapOfFilters = (Map<String, List<RequestValidation>>) mapOfFiltersField.get(mapper);
		assertThat(actualMapOfFilters, sameInstance(mapOfValidations));
	}
}

To Do
There are other classes that might be candidates for some refactoring.
RequestFlowValidation and RequestFilter are similar, and so are RequestValidationsEngineImpl and FiltersEngine.

To Do 2
Create a Matcher for the reflection part.

Code
As always, all the code can be found at: https://github.com/eyalgo/request-validation

A tag for this post: all-components-in

Conclusion
The infrastructure is almost done.
During this time we are also implementing actual classes for the flow (validations, filters, actions).
These are covered neither in the posts nor on GitHub.
The infrastructure will be wired into a service we have, using Spring.
This will be explained in future posts.

Request Validation and Filtering by Flags – Introduction

General

We are working on a service that should accept some kind of request.

The request has a list of items. In the response, we need to tell the client whether the request is valid, and also return some information about each item: whether it is valid or not. If an item is valid, it will be persisted; if not, it should be filtered out. So the response can include how many items are valid (and were sent to be persisted) and a list of information about the filtered-out items.

The request has other metadata in it as well: a collection (set) of flags. The filtering and validation are based on the flags of the request, so one request may be validated and filtered differently than another.

We might have general validations / filters that need to be applied to any request, whatever flags it has.

Request Validation and Filtering High level design

Design

Flags Mapping

We’ll hold a mapping of flag-to-filters, and flag-to-validation.

Request

Has flags and items.

Components

Filter, Filter-Engine, Flags-Mapper
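
To make these concrete, here is a hedged sketch of the interfaces I have in mind; the names and signatures are illustrative only and will evolve in the following posts:

import java.util.List;

// Illustrative sketches only; the real names and signatures evolve later
public interface Filter {
  boolean isFilteredOut(Item item);
}

public interface FiltersEngine {
  List<Item> applyFilters(List<Filter> filters, List<Item> items);
}

public interface FlagsMapper {
  List<Filter> getFilters(Request request);
}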

Development Approach

Bottom Up

We have a basic request already, as the service is up and running, but we don’t have any infrastructure for flags, flag-mapping, validation and filtering.

We’ll work bottom-up: create the filtering mechanism, enhance the request, and then wire it all up using Spring.

Coding

I’ll try to show the code through its tests, and the development using a TDD-like approach.

I am using Eclipse’s EclEmma plugin for coverage.

General

By looking at the code, you can see the usage of JUnit, Mockito, Hamcrest and Google Guava.

You can also see small classes and an interface-oriented development approach.

Source Code

https://github.com/eyalgo/request-validation

Recommended Books

I have a list of books, which I highly recommend.
Each book taught me something different.

It all began years ago, when I was interviewing for my second workplace.
I was a junior Java developer, a coder. I didn’t have much experience and, more importantly, I did not have a mentor or someone who would direct me. I had learned on my own, after a CS Java course. Java 1.4 had just come out.

One of my first interviewers was a great mentor. We met for an hour (probably). I don’t remember the company.  I don’t remember the job position. I don’t remember his name.
But I DO remember a few things he asked me.
He asked me if I knew what TDD was. He asked me about XP.
He also recommended a book: Effective Java by Joshua Bloch.

He didn’t even know what a great gift he gave me.

So I went on and bought Effective Java, 1st edition, and TDD by Kent Beck.
That was my first step towards being a craftsman.

Effective Java and Refactoring
These two books look as if they are not entirely related.
However, both of them taught me a lot about design and patterns.
I started to understand how to write code using patterns (Refactoring) and how to do it in Java (Effective Java).
These books gave me the grounds for best practices in Java, design patterns and OOD.

Test Driven Development
I can’t say enough about this book.
At first, I really didn’t understand what it was all about.
But it was part of XP!! (which I didn’t understand either).
The book was left on the shelf until I was ready for it.

Clean Code and The Pragmatic Programmer
Should I say more?
If you haven’t read both, stop everything and go read them.
They are a MUST for anyone who wants to be a craftsman and takes his / her profession seriously.
These books are also lots of fun to read. Especially the Pragmatic book.

The Clean Coder
If you want to take the next step of being a professional, read it.
I was sometimes frustrated while reading it, thinking to myself: how can I pass all of this material on to my teammates…

Dependency Injection
Somewhat unrelated, but as I see it, if you don’t use DI, you can’t write clean, testable code.
If you can’t write clean, testable code, you are missing the point of craftsmanship.
The book covers some injector frameworks, but it also describes what DI is all about.

Below is a table with the books I have mentioned.

One last remark,
This list does not contain the only books I have read.
Over the years I have read more technical / professional books, but these made the most difference for me.

Name                      | Author(s)                  | ISBN
Effective Java            | Joshua Bloch               | 978-032-135-668-0
Test-Driven Development   | Kent Beck                  | 978-032-114-653-3
Refactoring               | Martin Fowler              | 978-020-148-567-7
Dependency Injection      | Dhanji R. Prasanna         | 978-193-398-855-9
Clean Code                | Robert C. Martin           | 978-013-235-088-4
The Clean Coder           | Robert C. Martin           | 978-013-708-107-3
The Pragmatic Programmer  | Andrew Hunt, David Thomas  | 978-020-161-622-4

Coding Exercise Introduction

As part of my job, I do a lot of architectural design, OOD, clean code, TDD and everything that strives to be craftsmanship.

However, I don’t get to work on many problems such as tree traversal, BFS, DFS, lists, etc.
We can call these the kind of problems given in CS1 and CS2 courses.
I also don’t have the opportunity to learn new languages. We’re writing in Java and there is no reason at the office to start learning a new language. At least not for business purposes.

But, as a professional developer, I want to constantly exercise, sharpen and improve my skills.
So I took upon myself a small project:

  1. Do some basic coding that I usually don’t do
  2. Learn a new language

As for task #1, I have already written some Java code for problems I thought of, and I will try to add more in the weeks to come.

As for task #2, I decided to start learning Ruby. Why Ruby? No particular reason; it’s different from Java and in demand in the market.
Once I get comfortable with Ruby, my plan is to write the problems in Ruby.

[EDIT]
The code is in GitHub
See Why at: Why GitHub
[EDIT]

Some of the code has nicely written tests; with some of it, sad to say, I just played around, and it is not REALLY, AUTOMATICALLY tested. This is something that must be fixed as well.

These are the problems I have already written (a sketch of one of them follows the list):

  • Factorial
  • Fibonacci
  • Reverse a list
  • Anagram
  • Palindrome
  • BFS tree traversal
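
To give a flavor of these exercises, here is a minimal sketch of one of them (not necessarily the implementation in the repository):

// A simple recursive palindrome check; one flavor of the exercises listed above
public class Palindrome {
  public static boolean isPalindrome(String s) {
    if (s.length() <= 1) {
      return true; // an empty string or a single character is a palindrome
    }
    if (s.charAt(0) != s.charAt(s.length() - 1)) {
      return false; // first and last characters differ
    }
    return isPalindrome(s.substring(1, s.length() - 1));
  }
}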

Code at GitHub: https://github.com/eyalgo/brainers-java

Getting Started with Google Guava – Book Review

I recently got my hands (my Kindle) on the book Getting Started with Google Guava by Bill Bejeck.

I love reading technical books and always hope to learn new stuff. As an extensive user of the Guava library, I was really intrigued to see what I was missing from this library and how I could improve the usage of it.

I will not go over it chapter by chapter with explanations, as anyone can check the TOC and see the details of what this book covers. Instead, I will try to give my own impression.

The book covers all aspects of the Guava library. For each aspect, the author shows the most used implementation and mentions other ones.

In nearly every chapter, I was introduced to some gems that immediately went into our own codebase when I started refactoring. That was FUN. And I saw code improvements instantly.

I really enjoyed reading the code examples with the extensive usage of JUnit as showcases for the behavior of the various classes. It’s a great way of showing what the library does. And as a side effect, it shows developers how a test is used as the specs of the code.

It seems that the author was very meticulous in writing clean and testable code, two areas which I think are the most important for being a professional developer (a craftsman).

I think that this book is great for both newbies and experienced Guava users.
I think it is also great for developers who want to have some kind of knowledge on how to write clean and better code.