Continuous (Pre) Planning

Introduction

In this post I want to introduce the concept of continuous planning, a process flow that we established in my team.

Pre Planning – Short Description

One of the key meetings in a scrum cycle is the Pre-Planning.

Usually it’s a long meeting, followed by hours of writing subtasks and time estimations.

In this meeting the team members understand the user stories, challenge the PMs and make estimations. Later the developers decide on solutions and break them into subtasks.

Background About the Team

Team Structure

My team is a cross-functional one. I have frontend and backend developers, as well as more DevOps-oriented engineers.

This means that developers sometimes have independent tasks, which are relevant to only one or two of them.

Products We Own

We are responsible for three major products, and some minor ones.

In a single sprint we work on different products. These products are usually unrelated to each other, which means independent tasks.

Team Culture

In the beginning the team was small, and so was the company.

We had a “startup culture”.

Deliver fast, no planning, no estimations.

People hated long, boring meetings; they felt like a waste of time.

The Problem

As it turned out, there were many problems in the way we tried to work.

There were common issues raised in our retrospectives, as well as in my personal observations.

Retrospective

Here’s a list of the main issues we noticed:

  • During planning, there were long discussions that were relevant only to one engineer and the PM (a result of multiple products and a cross-functional team). People were bored and lost concentration.
  • Many user stories were not even started because the PMs had not clarified them. We discovered that only in the middle of the sprint.
  • User stories were not done because of lack of planning.
  • Overestimated / underestimated sprint content.
  • Lack of ownership. Features were not done because dependency issues were not thought through in advance.

Solution – Continuous Planning

At some point we started doing things differently. After several iterations, it became continuous planning.

The idea was simple. I asked our PM to provide the user stories for the next sprint when the current sprint starts.

Then I checked them and either assigned them to the relevant developers or pushed them back to the PM.

The developers then started looking at the user stories, understanding them, and adding their planning as subtasks with time estimations.

So basically we had almost two weeks to plan for the next sprint.

How It Works

User Stories are ready in advance

The crucial part here is the timing: when are the user stories ready?

Let’s say the Sprint duration is two weeks and it starts on Wednesday.

The idea is simple: on Wednesday, when Sprint 18 starts, the PM should already have provided all user stories for Sprint 19.

By provide I mean have them:

  1. In the backlog, under Sprint 19 (we use JIRA, so we create sprints in advance)
  2. Well defined (Description, DoD, etc.)

First Check

The user stories are assigned to me and I go over them. I verify that they are clear and have a proper Definition of Done. I identify dependencies between the user stories. Then I either assign the user story to a developer or reassign it to the PM, challenging its content or priority.

Discovery and Planning By Each Engineer

From now on, each engineer, during the current sprint, will start planning the assigned user stories for the next sprint. He will talk directly to the PM for clarifications and will identify dependencies.

He has full authority to challenge the user story for not being clear. He can reassign it back to me or the PM.

Collaboration

If there is a dependency within the team, such as between FE and BE, the engineers themselves will talk about it and assign the relevant subtasks in the user stories.

Estimations

The estimations are part of the planning.

Each developer will add his own subtasks with time estimation.

Timing Goal – It’s All About “When”

The key element for success in this process is timing. Our goal is that all user stories for the next sprint are fully provided at the beginning of the current sprint.

We also aim to have all user stories planned (subtasks + estimations) 1-2 days before next sprint starts.

Observations and conclusion

Our process is still improving. Currently around 10%-20% of the user stories still come at the last minute, violating the timing goal.

We also encounter dependencies that we didn’t find while planning.

Here’s a list of the pros and cons I already see:

Pros

Responsibilities and Ownership

One of the outcomes, which I didn’t anticipate, is that each developer has much more responsibility for and ownership of the user stories.

The engineer must think about the requirements, the design and the dependencies, beyond just the execution.

Onboarding new team members

New team members arrived and had a user story assigned to them on their first day in the office. They “jumped into the cold water” and started understanding the feature, the system and the code almost immediately.

Collaboration

As each team member is responsible for an entire feature, collaboration between the team members has increased. There is constant discussion within the team.

It has also increased the collaboration and communication between developers and PMs.

PM Work

As the PM works harder (see the cons below), continuous planning forces him to plan further ahead.

This process “forced” the PM to have a clear vision of 3-4 sprints in advance. This clear vision is transparent to everyone, as it is reflected in the JIRA backlog board.

Visibility and Planning Ahead

The PM’s clear vision is reflected in the backlog (a JIRA board in our case), making it more transparent. As the board is usually filled with backlog items divided into sprints, the visibility of the future plan is much better.

Challenges (cons)

PM Work

It seems that the PM has more work. User stories should be ready 1-2 weeks in advance.

The PM needs to work on future sprints (plural) while answering questions about the next sprint and verifying the current sprint’s status.

Questionable Capacity

When there is a dedicated meeting / day for the preplanning, it’s easier to measure the capacity of the team.

It’s harder to understand the real capacity of the team while the developers spend time planning the next sprint during the current one.

Architectural and design decisions

Everyone needs to be much more careful with the architecture and design of the system. As each developer plans his own part, he needs to be more aware of the other developers’ plans.

This is where the manager / lead should assist: checking that everyone is aligned and making sure there’s good communication.

Losing Control

The lead / manager has less control. Meaning, not everything passes through him.
If you’re a micromanager, you will need to let go.

We identified points where the manager (me) must be involved:

  • Dependencies within the team and / or with other teams
  • Architectural / design decisions
  • System behavior

Conclusion

We established a well-understood, simple-to-follow, clear process.

This process is good for our team. It may be good for other teams, perhaps with some adjustments.

As described above, if

  1. There are different roles in the team (frontend, backend)
  2. The team works on different products / projects in the same sprint
  3. People feel that the pre-planning meeting is a waste of time

Then perhaps continuous planning is a good approach.

This post was originally published in our company's tech blog:
 http://techblog.applift.com/continuous-pre-planning


Working with Legacy Test Code

Legacy Code and Smell by Tests

Working with unit tests can help improve the code-base in many ways.
One of the aspects I like most is that tests can point us to code smells in the production code.
For example, if a test needs a large setup or asserts many outputs, it can indicate that the unit under test doesn’t follow good design, such as SRP and other OOD principles.

But sometimes the tests themselves are poorly structured or designed.
In this post I will give two examples of such cases and show how I solved them.

Test Types

(or layers)
There are several types, or layers, of tests.

  • Unit Tests
    Unit tests should be simple to describe and to understand.
    Those tests should run fast. They should test one thing; one unit (method?) of work.
  • Integration Tests
    Integration tests are more vague in definition.
    What kind of modules do they check?
    Integration of several modules together? Dependency-Injector wiring?
    Tests using a real DB?
  • Behavioral Tests
    Those tests will verify the features.
    They may be the interface between the PM / PO and the dev team.
  • End2End / Acceptance / Staging / Functional
    High level tests that may run on production or a production-like environment.

Complexity of Tests

Basically, the “higher level” the test, the more complex it is.
Also, the ratio between the possible number of tests and the production code increases dramatically per test level.
Unit tests will grow linearly as the code grows.
But starting with integration tests and higher level ones, the options start to grow at an exponential rate.
A simple calculation:
if two classes interact with each other, and each has 2 methods, how many options should we check to cover everything? Each method of the first class may interact with each method of the second, which already gives 4 combinations. Now imagine that those methods have some control flow, like an if: every branch multiplies the number of paths again.

Sporadically Failing Tests

There are many reasons for a test to be “problematic”.
One of the worst is a test that sometimes fails and usually passes.
The team starts ignoring the CI’s mails. It creates noise in the system.
You can never be sure whether there’s a bug, something was broken, or it’s a false alarm.
Eventually we’ll disable the CI because “it doesn’t work and it’s not worth the time”.

Integration Test and False Alarm

Any type of test is subject to false alarms if we don’t follow basic rules.
The higher the test level, the greater the chance of false alarms.
In integration tests, there’s a higher chance of false alarms due to external resource issues:
no internet connection, no DB connection, a random miss and many more.

Our Test Environment

Our system is “quasi legacy”.
It’s not exactly legacy, because it has tests. Those tests even have good coverage.
It is legacy because of the way it is (un)structured and the way the tests are built.
It used to be covered only by integration tests.
In the past few months we started implementing unit tests, especially for new code and new features.

All of our integration tests inherit from BaseTest, which inherits Spring’s AbstractJUnit4SpringContextTests.
The test’s context wires everything. About 95% of the production code.
It takes time, but even worse, it connects to real external resources, such as MongoDB and services that connect to the internet.

In order to improve test speed, a few weeks ago I changed MongoDB to an embedded one. It improved the tests’ running time by an order of magnitude.
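
For reference, starting an embedded MongoDB in a test typically looks something like this. This is only a sketch, assuming the de.flapdoodle.embed.mongo library; the exact class names depend on the version used.

// A sketch of starting an embedded MongoDB with flapdoodle (assumed setup)
MongodStarter starter = MongodStarter.getDefaultInstance();
MongodExecutable mongodExecutable = starter.prepare(new MongodConfigBuilder()
		.version(Version.Main.PRODUCTION)
		.net(new Net(12345, Network.localhostIsIPv6()))
		.build());
MongodProcess mongod = mongodExecutable.start();

// ... run the tests against localhost:12345 ...

mongod.stop();
mongodExecutable.stop();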

This type of setup makes testing much harder.
It’s very difficult to mock services, and the environment is not isolated from the internet, the DB and much more.

After this long introduction, I want to describe two problematic tests and the way I fixed them.
Their common attribute was that they sometimes failed and usually passed.
However, each failed for a different reason.

Case Study 1 – Creating Internet Connection in the Constructor

The first example shows a test which sometimes failed because of connection issues.
The tricky part was that a service was created in the constructor.
That service received an HttpClient, which was also created in the constructor.

Another issue was that I couldn’t modify the test to use mocks instead of the Spring wiring.
Here’s the original constructor (modified for the example):

private HttpClient httpClient;
private MyServiceOne myServiceOne;
private MyServiceTwo myServiceTwo;

public ClassUnderTest(PoolingClientConnectionManager httpConnectionManager, int connectionTimeout, int soTimeout) {
	HttpParams httpParams = new BasicHttpParams();
	HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout);
	HttpConnectionParams.setSoTimeout(httpParams, soTimeout);
	HttpConnectionParams.setTcpNoDelay(httpParams, true);
	httpClient = new DefaultHttpClient(httpConnectionManager, httpParams);

	myServiceOne = new MyServiceOne(httpClient);
	myServiceTwo = new MyServiceTwo();
}

The tested method used myServiceOne,
and the test sometimes failed because of connection problems in that service.
Another problem was that the result (coming from the web) wasn’t always deterministic, and therefore the test failed.

The way the code is written does not enable us to mock the services.

In the test code, the class under test was injected using the @Autowired annotation.

The Solution – Extract and Override Call

The idea was taken from Working Effectively with Legacy Code.

  1. Identify what I need to fix.
    In order to make the test deterministic and avoid a real connection to the internet, I need access to the services’ creation.
  2. Introduce protected methods that create those services.
    Instead of creating the services in the constructor, the constructor will call those methods.
  3. In the test environment, create a class that extends the class under test.
    This class will override those methods and return fake (mocked) services.

Solution’s Code

public ClassUnderTest(PoolingClientConnectionManager httpConnectionManager, int connectionTimeout, int soTimeout) {
	HttpParams httpParams = new BasicHttpParams();
	HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout);
	HttpConnectionParams.setSoTimeout(httpParams, soTimeout);
	HttpConnectionParams.setTcpNoDelay(httpParams, true);

	this.httpClient = createHttpClient(httpConnectionManager, httpParams);
	this.myServiceOne = createMyServiceOne(httpClient);
	this.myServiceTwo = createMyServiceTwo();
}

protected HttpClient createHttpClient(PoolingClientConnectionManager httpConnectionManager, HttpParams httpParams) {
	return new DefaultHttpClient(httpConnectionManager, httpParams);
}

protected MyServiceOne createMyServiceOne(HttpClient httpClient) {
	return new MyServiceOne(httpClient);
}

protected MyServiceTwo createMyServiceTwo() {
	return new MyServiceTwo();
}
// In the test class:
private MyServiceOne mockMyServiceOne = mock(MyServiceOne.class);
private MyServiceTwo mockMyServiceTwo = mock(MyServiceTwo.class);
private HttpClient mockHttpClient = mock(HttpClient.class);

private class ClassUnderTestForTesting extends ClassUnderTest {

	private ClassUnderTestForTesting(int connectionTimeout, int soTimeout) {
		super(null, connectionTimeout, soTimeout);
	}

	@Override
	protected HttpClient createHttpClient(PoolingClientConnectionManager httpConnectionManager, HttpParams httpParams) {
		return mockHttpClient;
	}

	@Override
	protected MyServiceOne createMyServiceOne(HttpClient httpClient) {
		return mockMyServiceOne;
	}

	@Override
	protected MyServiceTwo createMyServiceTwo() {
		return mockMyServiceTwo;
	}
}

Now, instead of wiring the class under test, I created it in the @Before method.
It accepts other services (not described here). I got those services using @Autowired.

Another note: before creating the special class-for-test, I ran all the integration tests of this class in order to verify that the refactoring didn’t break anything.
I also restarted the server locally and verified that everything works.
It’s important to do those verifications when working with legacy code.

Case Study 2 – Statistical Tests for Random Input

The second example describes a test that failed due to random results and a statistical assertion.

The code did a randomized selection between objects with similar attributes (I am simplifying the scenario here).
The Random object was created in the class’s constructor.

Simplified Example:

private Random random;

public ClassUnderTest() {
	random = new Random();
	// more stuff
}

//The method is package protected so we can test it
MyPojo select(List<MyPojo> pojos) {
	// do something
	int randomSelection = random.nextInt(pojos.size());
	// do something
	return pojos.get(randomSelection);
}

The original test did a statistical analysis.
I’ll just explain it, as it is too complicated and verbose to show here.
It had a loop of 10K iterations. Each iteration called the method under test.
It had a Map that counted the number of occurrences (returned results) per MyPojo.
Then it checked whether each MyPojo was selected around (10K / number-of-MyPojos) times, within some deviation.
Example:
say we have 4 MyPojo instances in the list.
Then the assertion verified that each instance was selected between 2400 and 2600 times (around 10K / 4, with some allowed deviation).

You can of course expect that the test sometimes failed. Increasing the deviation would only reduce the number of false failures.

The Solution – Overload a Method

  1. Overload the method under test.
    In the overloaded method, add a parameter of the same type as the class’s field.
  2. Move the code from the original method to the new one.
    Make sure you use the method’s parameter and not the class’s field. Different names can help here.
  3. Test the newly created method with a mock.

Solution Code

private Random random;

// Nothing changed in the constructor
public ClassUnderTest() {
	random = new Random();
	// more stuff
}

// Overloaded method, delegating to the testable one with the real Random
private MyPojo select(List<MyPojo> pojos) {
	return select(pojos, this.random);
}

//The method is package protected so we can test it
MyPojo select(List<MyPojo> pojos, Random inRandom) {
	// do something
	int randomSelection = inRandom.nextInt(pojos.size());
	// do something
	return pojos.get(randomSelection);
}
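
Now the test can pass a mocked Random and become fully deterministic. Here’s a minimal sketch of such a test, assuming Mockito and hamcrest, and assuming the test class sits in the same package as ClassUnderTest (so it can call the package-protected method):

@Test
public void whenRandomReturnsTwoThenThirdPojoIsSelected() {
	// Given - a mocked Random that always "draws" index 2
	Random mockRandom = mock(Random.class);
	List<MyPojo> pojos = Arrays.asList(new MyPojo(), new MyPojo(), new MyPojo());
	when(mockRandom.nextInt(pojos.size())).thenReturn(2);

	// When
	MyPojo selected = new ClassUnderTest().select(pojos, mockRandom);

	// Then
	assertThat(selected, sameInstance(pojos.get(2)));
}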

Conclusion

Working with legacy code can be challenging and fun.
Working with legacy test code can be fun as well.
It feels really good to stop receiving annoying mails about failing tests.
It also increases the team’s trust in the CI process.


It’s All About Tests – Part 3

In the previous two posts I discussed mostly the philosophy and attitude of developing with testing.
In this post I give some tips and tool examples for testing.

Tools

JUnit
http://junit.org/
There’s also TestNG, which is a great tool. But I have much more experience with JUnit, so I will describe this framework.
1. Use the latest version.
2. Know your testing tool!

  • @RunWith
    This is a class annotation. It tells JUnit to run with a different Runner (the mockito and Spring runners are the most common runners I use)

    import org.mockito.runners.MockitoJUnitRunner;
    ...
    @RunWith(MockitoJUnitRunner.class)
    public class MyClassTest {
      ...
    }
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "/META-INF/app-context.xml","classpath:anotherContext.xml" })
    public class MyClassTest {
      ...
    }
    // You can inherit AbstractJUnit4SpringContextTests instead of using runner
    
  • @Rule
    A kind of AOP.
    The most common out-of-the-box rule is the TemporaryFolder rule. It lets you use the file system without worrying about opening and closing files.
    An example of Rules can be found here.
  • Parameterized runner
    Really cool tool. It lets you run the same test with different input and different expected output.
    It might be abused and make a test unreadable.
  • Test Data Preparation and Maintenance Tips

hamcrest
http://hamcrest.org/JavaHamcrest/
This library is an “extension” of JUnit.
I can’t work without it 🙂
Hamcrest library gives us out-of-the-box matchers.
Matchers are used with the assertThat(...,Matcher) flavor.
I almost always use this flavor.
(In the previous post, someone suggested that I shouldn’t use assertTrue(…), but instead use assertThat.)

There are plenty of matcher types:
you can verify the existence of objects in a collection, ignoring order.
You can check greater than.
The test is more readable using assertThat + a matcher.

assertThat(mapAsCache.containsKey(new CacheKey("valA", "valB")), is(true));
assertThat(cachePairs.size(), is(2));
assertThat(entity.getSomething(), nullValue(Double.class));
assertThat(event.getType(), equalTo(Type.SHOWN));
assertThat(bits, containsInAnyOrder(longsFromUsIndexOne, longsFromUsIndexZero));

You can create your own Matcher. It’s very easy.
Here’s an example of matchers that verify regular expressions: https://github.com/eyalgo/junit-additions
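
For reference, a custom matcher is usually a small class extending TypeSafeMatcher. Here’s a minimal sketch of a regex matcher (illustrative, not taken from the linked project):

import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

public class MatchesRegex extends TypeSafeMatcher<String> {
	private final String regex;

	public MatchesRegex(String regex) {
		this.regex = regex;
	}

	@Override
	protected boolean matchesSafely(String item) {
		return item.matches(regex);
	}

	@Override
	public void describeTo(Description description) {
		description.appendText("a string matching /" + regex + "/");
	}
}

// Usage: assertThat("release-1.2", new MatchesRegex("release-\\d+\\.\\d+"));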

mockito
https://code.google.com/p/mockito/
This is the second library I can’t work without.
It lets you mock dependencies of the class under test.

Using mockito you mock a dependency.
Then you “tell” the mock object how to behave for certain inputs.
You tell it what to return when some input is entered.
You can verify the input arguments to a called method.
You can verify that a certain method was called (once, never, 3 times, etc.).
You can check the order of method / mock calls.

Check this out:

package eyalgo;

import static org.hamcrest.Matchers.equalTo;
import static org.mockito.Matchers.anyString;
import static org.mockito.Matchers.argThat;
import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;
import static org.mockito.Mockito.verifyZeroInteractions;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InOrder;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.runners.MockitoJUnitRunner;
import org.mockito.stubbing.Answer;

// The RunWith runner automatically instantiates fields with the @Mock annotation
// and injects them into the tested class annotated with @InjectMocks
@RunWith(MockitoJUnitRunner.class)
public class NameConnectorTest {
	@Mock
	private NameConvention nameConventionAsMockField;

	@InjectMocks
	private NameConnector connector;

	private NameConvention nameConventionAsMockOther;

	@Before
	public void setup() {
		// This is another way to inject mocks (instead of the annotations above)
		MockitoAnnotations.initMocks(this);
		nameConventionAsMockOther = mock(NameConvention.class);
		NameConnector otherConnector = new NameConnector(nameConventionAsMockOther);
	}

	@Test
	public void showSomeMockitoExamples() {
		NameConvention nameConventionAsMock = mock(NameConvention.class, "Name for this mock");

		// Stub and tell your mock to do something
		when(nameConventionAsMock.bigBangConvention("INPUT")).thenReturn("Some output");
		// Throw an exception for some input
		when(nameConventionAsMock.bigBangConvention("Other INPUT")).thenThrow(new RuntimeException("oops"));
		// Do more complicated stuff in the "when"
		Answer<String> answer = new Answer<String>() {
			@Override
			public String answer(InvocationOnMock invocation) throws Throwable {
				// do something really complicated
				return "some output";
			}
		};
		// Show also hamcrest matchers
		when(nameConventionAsMock.bigBangConvention(argThat(equalTo("my name is Inigo Montoya")))).then(answer);

		// Run the test..

		// Verify some calls
		verify(nameConventionAsMock).bigBangConvention("INPUT");
		verify(nameConventionAsMock, times(10)).bigBangConvention("wow");
		// Verify that the method was never called, with any input
		verify(nameConventionAsMock, never()).bigBangConvention(anyString());
		verifyNoMoreInteractions(nameConventionAsMock);
		verifyZeroInteractions(nameConventionAsMockField);

		// Check the order of calls
		InOrder order = inOrder(nameConventionAsMock, nameConventionAsMockOther);
		order.verify(nameConventionAsMock).bigBangConvention("INPUT");
		order.verify(nameConventionAsMock).bigBangConvention("other INPUT");
	}
}

Other Mocking Tools

  • PowerMock and EasyMock
    These two are very useful when working with legacy code.
    They allow you to test private methods, static methods and more things that you normally can’t.
    I think that if you need them, then something is wrong with the design.
    However, sometimes you use external libraries with singletons and/or static methods.
    Sometimes you work on legacy code, which is not well suited for testing.
    In these types of scenarios, those mocking libraries can help.
    https://code.google.com/p/powermock/
    http://easymock.org/
  • JMockit http://jmockit.github.io/
  • jMock http://jmock.org/

JBehave
http://jbehave.org/
JUnit, mockito and hamcrest are used for unit tests.
JBehave is not exactly the same.
It is a tool for Behavior-Driven Development (BDD).
You write stories, which are backed by code (Java), and then you run them.

JBehave can be used for higher level tests, like functional tests.
Using JBehave, it’s easier to test a flow in the system.
It follows the Given, When, Then sequence.

If you take it to the next step, it can be a great tool for communication.
The product owner can write the scenarios, and if all is green, by the end of the iteration, then we passed the definition of done.
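
To give a flavor of it, here’s a minimal sketch of a steps class backing a story (the story text and all the names are illustrative):

import static org.junit.Assert.assertEquals;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

// Backs a story like:
//   Given a cart with 2 items
//   When the user adds 1 more item
//   Then the cart contains 3 items
public class CartSteps {
	private int itemsInCart;

	@Given("a cart with $count items")
	public void givenACartWithItems(int count) {
		itemsInCart = count;
	}

	@When("the user adds $count more item")
	public void whenTheUserAddsItems(int count) {
		itemsInCart += count;
	}

	@Then("the cart contains $count items")
	public void thenTheCartContains(int count) {
		assertEquals(count, itemsInCart);
	}
}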

Cucumber is another BDD tool.

Dependency Injection
In order to have testable code, among other things, you need to practice DI (dependency injection).
The reason is simple:
if you instantiate a dependency in a constructor (or method) of a class under test, then how can you mock it?
If you can’t mock the dependency, then you are bound to it, and you can’t simulate different cases.

Many applications have Spring as the DI container, but fewer developers take advantage of using the injection for testing.
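
As a minimal sketch (the class names are made up), constructor injection leaves a seam that lets a test pass a mock instead of the real dependency:

public interface ReportDao {
	List<String> findAllNames();
}

public class ReportService {
	private final ReportDao dao;

	// The dependency is injected, not instantiated here,
	// so a test can pass mock(ReportDao.class) instead of a real DAO
	public ReportService(ReportDao dao) {
		this.dao = dao;
	}

	public int countReports() {
		return dao.findAllNames().size();
	}
}

In a unit test you would create the service with new ReportService(mock(ReportDao.class)) and stub findAllNames() to simulate any case you need.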

Metrics
Use SONAR in your CI environment.
Check code coverage using cobertura or other tools.
Use Jenkins / Hudson / Other CI tool for automation.

IDE
Your IDE can help you write tests.
For eclipse, I have two recommendations:

  1. MoreUnit is a cool plugin that helps write tests faster.
  2. In eclipse, CTRL+Space can give you hints and fill in imports. But not static imports.
    Most (all?) libraries use static imports.
    So you can add the testing libraries as favorites, and then eclipse will fill them in for you (see the screenshot below).
    (Screenshot: eclipse favorites)

POM
Here’s part of a POM for testing libraries.

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <archive>
          <addMavenDescriptor>false</addMavenDescriptor>
        </archive>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-source-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>jar-no-fork</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <archive>
          <manifest>
            <mainClass>com.startapp.CouchRunner.GetUserProfile</mainClass>
          </manifest>
        </archive>
        <descriptorRefs>
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
      </configuration>
    </plugin>
  </plugins>
</build>

You can use profiles to separate unit testing from integration tests.


It’s All About Tests – Part 2

This is the second post of the series about testing.
In the first part I explained the mindset we need to have while developing with tests. Or, in better words, developing for testable code.
In this part I will cover some techniques for the testing approach.
The techniques I describe can be seen as ways to transform the mindset into actions.

Techniques

Types Of Tests
Types of tests are layers of what we test.

The most obvious one is the unit test.
Using JUnit (or TestNG, or any other tool), you will test the behavior of your code.
Each test should check one behavior of the class/method under test.

Another layer of tests, which is usually done by developers, is what I like to call integration tests.
This type of test will usually be part of the code (under the test directory).

Integration tests may test several classes together.
They may test a partial flow.

I like to test the Spring wiring, verifying that the context file is correct. For example, if I have an injected list of beans and the order is important.
Testing the wiring can be considered an integration test.
Another example would be checking the integration of a DAO class and the class that uses it. Sometimes there are “surprises” in these parts.
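
For example, a wiring test verifying the order of an injected list might look like this sketch (the context location and bean types are illustrative):

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "/META-INF/app-context.xml" })
public class BeansWiringTest {
	@Autowired
	private List<Validator> validators; // the injected list whose order matters

	@Test
	public void validatorsAreWiredInTheExpectedOrder() {
		assertThat(validators.get(0), instanceOf(MandatoryFieldsValidator.class));
		assertThat(validators.get(1), instanceOf(BusinessRulesValidator.class));
	}
}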

As a higher degree of tests, you will want to test requests and responses (REST).
If you have a GUI, make an automated test suite for that as well.

Automation
Automate your full development cycle.
Use a CI service, such as Hudson/Jenkins.
Add your JUnit, selenium, JMeter and JBehave suites to your CI environment.

I suggest the following:
1. A CI job that checks the SCM for changes and runs whenever there is a change.
2. A nightly build (or one every few hours): a slower automation test suite that checks more stuff, like integration tests.
The nightly can be slower.
If you do continuous deployment, then your setup may be different.

Environment
Have a dedicated environment for testing:
a DB that can be cleared and refilled.
If you work on a REST service, have a server just for your test and automation environment.
If you can, try making it as similar as possible to the production environment.

Stub, Mock
There are frameworks for stubbing and mocking,
but first understand what it means.
There’s a slight difference between stubbing and mocking.
Basically they both fake a real object (or interface).
You can tell the fake object to behave as you want for certain inputs.
You can also verify that it was called with the expected parameters.
(More about it in the next post.)

Usage of External Resources
You can fake the DB, or you can use some kind of embedded database.
An embedded database helps you isolate tests that include the DB.
The same goes for external services.

Descriptive Tests

  • Add the message parameter.
    assertTrue("Cache pairs is not size 2", cachePairs.size() == 2);
    

    It has at least two benefits:
    1. The test is more readable
    2. When it fails, the message is clearer

    How many times could you not tell what went wrong because there was no message? The failing test was assertTrue(something), without the message parameter.

  • Name your tests descriptively.
    Don’t be afraid to have test methods with (very) long names.
    It really helps when the test fails.
    Don’t name a test something like: public void testFlow(){...}
    It doesn’t mean anything.
  • Have a naming convention.
    I like to name my tests: public void whenSomeInput_ThenSomeOutput() {...}
    But whatever you like to name your tests, try to follow some convention for all tests.

Test Structure
Try to follow the:
Given, When, Then sequence.
Given is the part where you create the test environment (create an embedded DB, set certain values, etc.).
It is also the part where you tell your mocks (more about it in the next post) how to behave.
When is the part where you run the tested code.
Then is where you check the result using assertions.
It’s also the part where you verify that methods were called. Or not.
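
A small sketch of this structure (the names are illustrative):

@Test
public void whenItemIsAddedThenCartHoldsIt() {
	// Given - set up the environment and the mocks' behavior
	Cart cart = new Cart();
	Item item = new Item("book");

	// When - run the tested code
	cart.add(item);

	// Then - check the result using assertions
	assertThat(cart.size(), is(1));
}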

If it’s hard to keep an orderly structure, then consider it as test-smell (see previous post).

Unit Tests Should Run Fast
A unit test of a class should run 1-5 seconds. Not more.
You want the quickest feedback whether something failed.
You will also want to run the unit tests as many times as possible.
If a test for one class takes around 30-60 seconds, then usually we won’t run it.

Running the full test suite of your project should not take more than a few minutes (more than 5 is too much).

Coverage
Tests should cover all your production code.
Coverage helps spot code which is not tested.
If it’s hard to cover some code, for instance due to many code branches (if-else), then, again, you have a test smell.
If you practice TDD, then you automatically have very high coverage.

Important: do not make code coverage the goal.
Code coverage is a tool. Use it.

TDD
Allow me not to add anything here…

Conclusion
In this post I gave some more concrete ways to approach development with tests.
In the following post I will give some pointers and tips on how to work with the available tools.


It’s All About Tests – Part 1

This post is the first of a series of three.
1. Mindset of testing
2. Techniques
3. Tools and Tips

The Mindset

Testing code is something that needs to be learned. It takes time to absorb how to do it well.
It’s a craft that one should always practice and improve.

Back in the old days, developers did not test; they checked their code.
Here’s a nice tweet about it:

Today we have many tools and techniques to work with.
XUnit frameworks, mock frameworks, UI automation, TDD, XP…

But I believe that testing starts with the mind. State of mind.

Why Testing
Should I really answer that?
Tests are your code’s harness and its security for quality.
Tests tell the story of your code. They prove that something works.
They give immediate feedback if something goes wrong.
Working with tests correctly makes you more efficient and effective.
You debug less and probably have fewer bugs, therefore you have more time to do actual work.
Your design will be better (more about it later) and more maintainable.
You feel confident changing your code (refactoring). More about it later.
It reduces stress, as you are more confident with your code.

What to Test
I say everything.
Perhaps you will skip the lowest parts of your system. The parts that read/write to the file system or the DB, or communicate with some external service.
But even these parts can be tested. And they should be.
In the following posts I will describe some techniques for doing that.

Test even the smallest thing. For example, if you have a DTO and you decide that a certain field will be initialized with some value, then write a test that only instantiates this class and then verifies (asserts) the expected value.
(And yes, I know, some parts really cannot be tested. But they should remain minimal.)
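
For example, a minimal sketch of such a test (the DTO and its default value are made up):

// Production code: the decision is that a new item starts as "NEW"
public class ItemDto {
	private String status = "NEW";

	public String getStatus() {
		return status;
	}
}

// The test only instantiates the class and asserts the expected default
@Test
public void whenDtoIsCreatedThenStatusIsNew() {
	assertThat(new ItemDto().getStatus(), equalTo("NEW"));
}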

SRP
Single Responsibility Principle
This is how I like to refer to the point that a test needs to check one thing.
If it’s a unit test, then it should test one behavior of your method / class.
Different behavior should be tested in a different test.
If it’s a higher level of test (integration, functional, UI), then the same principle applies.
Test one flow of the system.
Test a click.
Test adding elements to DB correctly, but not deleting in the same test.

Isolation
An isolated test helps us understand exactly what went wrong.
Developing isolated tests helps us concentrate on one problem at a time.

One aspect of isolation is related to the SRP. When you test something, isolate the tested code from other parts (dependencies).
That way you test only that part of the code.
If the test fails, you know where the problem was.
If you have many dependencies in the test, it is much harder to understand what the actual cause of failure was.

But isolation means other things as well.
It means that no test interferes with another.
It means that the running order of the tests doesn’t matter.
For a unit test, it means that you don’t need a DB running (or an internet connection, for that matter).
It means that you can run your tests concurrently without one interfering with the other (maven allows exactly this).
If you can’t do that (example: DB issues), then your tests are not isolated.

Test Smells
When a test is too hard to understand / maintain, don’t get mad at it 🙂
Say

thank you very much, my dear test, for helping me improve the code

If it is too complicated to set up the environment for a test, then probably the unit being tested has too many dependencies.

If, after running a method under test, you need to verify many aspects (verify, assert, etc.), the method probably does too much.
The test can be your best friend for code improvement.

Usually, really complicated test code means less structured production code.
I usually see a correlation between complicated tests and code that doesn’t follow the SRP or other SOLID principles.

Testable Code
This is one of my favorites.
Whenever I do code review I ask the other person: “How are you going to test it?”, “How do you know it works?”
Whenever I code, I ask myself the same question. “How can I test this piece of code?”

In my experience, always thinking about how to create testable code yields much better design.
The code “magically” has more patterns, less duplication, better OOD and behaves SOLIDly.

Forcing yourself to constantly test your code makes you think.
It helps divide a big, complicated problem into many (or a few) smaller, more trivial ones.

If your code is testable and tested, you have more confidence in it.
Confidence in its behavior, and confidence to change it. To refactor it.

Refactoring
This item can be part of the why.
It can be also part of the techniques.
But I decided to give it special attention.
Refactoring is part of the TDD cycle (but not only).
When you have tests, you can be confident doing refactoring.
I think that you need to “think about refactoring” while developing. Similar to “think how to produce testable code”.
When thinking about refactoring, testing comes along.

Refactoring is also a state of mind. Ask yourself: “Is the code I produced clean enough? Can I improve it?”
(BTW, know when to stop…)

This was the first post of a series of posts about testing.
The following post will be about some techniques and approaches for testing.


Why Abstraction is Really Important

Abstraction
Abstraction is one of the key elements of good software design.
It helps encapsulate behavior. It helps decouple software elements. It helps create more self-contained modules. And much more.

Abstraction makes the application much easier to extend. It makes refactoring much easier.
When developing with a higher level of abstraction, you communicate the behavior and less the implementation.

General
In this post, I want to introduce a simple scenario that shows how, by choosing a simple solution, we can get into a situation of hard coupling and rigid design.

Then I will briefly describe how we can avoid a situation like this.

Case study description
Let’s assume that we have a domain object called RawItem.

public class RawItem {
    private final String originator;
    private final String department;
    private final String division;
    private final Object[] moreParameters;
    
    public RawItem(String originator, String department, String division, Object... moreParameters) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.moreParameters = moreParameters;
    }
}

The first three parameters represent the item’s key.
I.e., an item comes from an originator, a department and a division.
The “moreParameters” member is just to emphasize that the item has more parameters.

This triplet has two basic usages:
1. As a key for storing in the DB
2. As a key in maps (key to RawItem)

Storing in DB based on the key
The DB tables are sharded in order to evenly distribute the items.
Sharding is done by a hash key modulo function.
This function works on a string.

Suppose we have N shard tables (RAW_ITEM_REPOSITORY_00, RAW_ITEM_REPOSITORY_01, .., RAW_ITEM_REPOSITORY_NN);
then we’ll distribute the items based on some function and a modulo:

String rawKey = originator + "_"  + department + "_" + division;
// func is a String -> Integer function, N = # of shards
// The representation of the key is described below
int shard = func(rawKey) % N;
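
For illustration only, func and the table selection could look like this sketch (this is an assumption, not our actual implementation):

// Masking the sign bit keeps the modulo result non-negative
String tableFor(String rawKey, int numberOfShards) {
	int shard = (rawKey.hashCode() & 0x7fffffff) % numberOfShards;
	return String.format("RAW_ITEM_REPOSITORY_%02d", shard);
}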

Using the key in maps
The second usage for the triplet is mapping the items for fast lookup.
So, when NOT using abstraction, the maps will usually look like this:

Map<String, RawItem> mapOfItems = new HashMap<>();
// Fill the map...

“Improving” the class
We see that we have a common usage for the key as a string, so we decide to put the string representation in the RawItem.

// new member
private final String key;

// in the constructor:
this.key = this.originator + "_" + this.department + "_"  + this.division;

// and a getter
public String getKey() {
  return key;
}

Assessment of the design
There are two flaws here:
1. Coupling between the sharding distribution and the items’ mapping
2. The mapping key is strict. Any change forces a change in the key, which might introduce hard-to-find bugs

And then comes a new requirement
Up until now, the triplet originator, department and division made up the key of an item.
But now a new requirement comes in:
a division can have a subdivision.
It means that, unlike before, we can have two different items from the same triplet. The items will differ by the subdivision attribute.

Difficult to change
Regarding the DB distribution, we’ll need to keep the concatenated key of the triplet.
We must keep the modulo function the same. So the distribution will remain based on the triplets, but the schema will change and have a ‘subdivision’ column as well.
We’ll change the queries to use the subdivision together with the original key.

In regard to the mapping, we’ll need to do a massive refactoring and pass an ItemKey (see below) instead of just a String.

Abstraction of the key
Let’s create an ItemKey:

public class ItemKey {
    private final String originator;
    private final String department;
    private final String division;
    private final String subdivision;

    public ItemKey(String originator, String department, String division, String subdivision) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.subdivision = subdivision;
    }

    public String asDistribution() {
        return this.originator + "_" + this.department + "_"  + this.division;
    }
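
    // Note: when ItemKey is used as a map key, equals() and hashCode()
    // over all the fields are required (omitted here for brevity)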
}

And,

Map<ItemKey, RawItem> mapOfItems = new HashMap<>();
// Fill the map...

// new constructor for RawItem
public RawItem(ItemKey itemKey, Object... moreParameters) {
    // fill the fields
}

Lesson Learned and conclusion
I wanted to show how a simple decision can really hurt.

And how, by a small change, we made the key abstract.
In the future the key can have even more fields, but we’ll only need to change its inner implementation.
The logic and the mapping usage will not need to change.

Regarding the change process,
I haven’t described how to do the refactoring, as it really depends on how the code looks and how well it is tested.
In our case, some parts were easy, while others were really hard. The hard parts were around code that looked deep into the implementation of the key (the string) and the item.

This situation was real
We actually had this exact flow in our design.
Everything was fine for two years, until we had to change the key (add the subdivision).
Luckily all of our code is tested, so we could see what broke and fix it.
But it was painful.

There are two abstractions that we could have implemented initially:
1. The more obvious one is using a KEY class (as described above), even if it only has one String field
2. Any map usage needs to be examined to see whether we’d benefit from hiding it behind an abstraction

The second abstraction is harder to grasp, to fully understand and to implement.
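
To make the second abstraction concrete, here’s a sketch of hiding the map behind a small lookup class (the names are illustrative, not from our code base):

public class RawItemLookup {
	private final Map<ItemKey, RawItem> items = new HashMap<>();

	public void add(ItemKey key, RawItem item) {
		items.put(key, item);
	}

	public RawItem getByKey(ItemKey key) {
		return items.get(key);
	}
}

Callers now depend on the lookup’s behavior, not on the map or on the key’s representation.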

So,
practice abstraction, tell a story using the interfaces, and don’t get into the details while telling it.


The Foreman Role in a Team

There is a lot of discussion about the need for a foreman role in a software team.
Robert C. Martin wrote about it in Where is the Foreman?

I recently read a post by David Tanzer who disagrees with Uncle Bob’s point: We don’t need a foreman

The way I see it, a foreman role is important, but perhaps not as extreme as the one Uncle Bob describes.
Let’s start by quoting the foreman’s role as Uncle Bob describes it (in construction and in software):

The foreman on a construction site is the guy who is responsible for making sure all the workers do things right. He’s the guy with the tape-measure that goes around making sure all the walls are placed properly. He’s the guy who examines all the struts, joists, and beams to make sure they are installed correctly, and don’t have any significant defects. He’s the guy who counts the screws in the flooring to make sure it won’t squeak when you walk on it. He’s the guy — the guy who takes responsibility — the guy who makes sure everything is done right.

What would the foreman do on software project? He’d do the same thing he does on a construction project. He’d make sure everything was done, done right, and done on time. He’d be the only one with commit rights. Everybody else would send him pull requests. He’d review each request in turn and reject those that didn’t have sufficient test coverage, or that had dirty code, or bad variable names, or functions that were too long. He’d reject those that, in his opinion, did not meet the level of quality he demands for the project.

I think that the commit rights are a crucial point for many critics of this idea.

I would like to suggest a middle way:
a foreman role, but also team responsibility.

In real life the team is diverse. Some are seniors, some juniors. Some are experts in a specific field, others in another field. Some are experts in TDD, some in design, some in DB and SQL, etc.

Here are the key points as I see them:
A diverse team.
A foreman who’s responsible for the quality and the delivery.
The foreman will sometimes make the final call, even if there is still a disagreement.
The foreman is also responsible that everyone works to the standards he introduced (with the help of the team).
Everyone can commit. Not just the foreman.
Everyone can suggest anything.

Here are some reasons why it might work:

  1. Everyone can commit and everyone can see others’ commits. This means that there is trust between the team members. It also gives each member more responsibility. The foreman in this case will still look for all the things that Uncle Bob describes. But when he sees something wrong (a missing test? code that is not well designed?) he will approach the person who committed the code and discuss what went wrong. The foreman will have an opportunity to mentor other team members and pass on his knowledge.
  2. The foreman can be a peer, with more responsibilities. If Fred, the foreman, notices that people make mistakes, he will discuss it with them. The foreman has more responsibility. He needs to know how to listen. He needs to explain and not blame.
  3. The foreman does not have to be the most experienced developer in everything. He can’t be. He may be the most experienced in one or two or three fields, but not all. So if Alice is the most experienced DB developer, Fred the foreman should see that she helps other team members with SQL-related stuff. He will still remind Alice about the procedures and code of the whole system.
  4. Sometimes the foreman will need to make decisions, and sometimes not everyone will agree. The foreman needs to know when to stop an argument and make the call.
  5. The foreman doesn’t need to have sole responsibility for quality. But he’s the one the management should approach. This is a tricky part; it’s hard to achieve. The team is responsible for the quality and delivery of the code. The foreman is responsible that the team achieves this. The foreman is responsible that everyone practices good coding (and everything that implies). The foreman is the one who needs to mentor team members who do not know how to produce quality code.
  6. The team is responsible for the architecture and design. As I mentioned before, the foreman will sometimes need to stop the discussion and make a decision. Each member of the team should have the opportunity to come forward with suggestions. Sometimes the foreman will bring the best ideas (after all, he’s supposed to be the most experienced), but more than once, another member will introduce the correct design. The foreman needs to listen.
  7. During the planning, the team will estimate effort (e.g. give points to user stories). Then, if the whole team is responsible for the design and architecture, the members will create the tasks with some time estimations. The foreman’s responsibility is to see that everyone understands the priorities. He should be part of the team while designing, and lead the design. If the team did not understand the priorities and didn’t deliver quality code, it’s his responsibility. But also the team’s.
  8. The foreman should introduce new technologies to the team. The foreman should introduce the team’s coding practices. The foreman must pair with other members, juniors and seniors. While pairing with juniors he actually mentors them. The foreman must see that the team does pair-programming. The foreman is the one who establishes code review habits in the team. As a foreman he can ask to review code even if it was already reviewed by another person. Sometimes this brings some antagonism, but as mentioned before, he has the responsibility and he’s the one who needs to answer to the management.

Uncle Bob suggested a rather extreme approach.
Perhaps it suits some extreme cases. He describes an open source project that actually works with several foremen: A Spectrum of Trust
On the other side, David Tanzer shows correctly why this approach may deteriorate the team’s spirit and trust.

I think that it’s possible to have a middle way.
I think that a team can have a foreman, a person who’s in charge, but still let everyone be involved. Have trust, spirit and motivation.


Agile Mindset During Programming

I’m Stuck

Recently I found myself in several situations where I just couldn’t write code. Or at least, “good code”.
First, I had “writer’s block”. I just could not see what my next test to write was going to be.
I could not find the name for the class / interface I needed.
Second, I just couldn’t simplify my code. Each time I tried to change something (a class / method) to a simpler construction, things got worse. Sometimes things broke.

I was stuck.

The Tasks

Refactor to Patterns

One of the situations we had was refactoring a certain piece of the code.
This piece of code is the manual wiring part. We use the DI pattern in ALL of our system, but due to some technical constraints, we must do the injection by hand. We can live with that.
So the refactoring of the wiring part would have given us a nice option to change some of the implementations during boot.
Some of the concrete classes should be different than others, based on some flags.
The design patterns we understood we would need were Factory Method and Abstract Factory.
The last remark is important for understanding why I had those difficulties.
I will get to it later.

New Module

Another task was to create a new module that gets some input items, extracts data from them, sends it to a service, parses the response, modifies the data accordingly and returns the items with the modified data.
While talking about it with a peer, we understood we needed several classes.
As always, we wanted to have high quality code, by using the known OOD principles wherever we could apply them.

So What Went Wrong?

In the case of refactoring the wiring part, I constantly tried to immediately create the end result of the abstract factory and the factory method that would call it.
There are a lot of details in that wiring code. Some are common and some needed to be separated by the factory.
I just couldn’t find the correct places to extract into methods and then into another class.
Each time I had to move code from one location and dependency to another.
I couldn’t tell what exactly the factory’s signature and methods would be.

In the case of the new module, I knew that I wanted several classes, each with one responsibility. I knew I wanted some level of abstraction and good encapsulation.
So I kept trying to create this great encapsulated abstract data structure. And the code kept being extremely complicated.
Important note: I always take the test-first approach.
Each time I tried to create a test for a certain behavior, it was really, really complicated.

I stopped

I went to have a cup of coffee.
I went to read some unrelated stuff.
And I talked to one of my peers.
We both understood what we needed to do.
I went home…

And then it hit me

The problem I had was that I knew where I needed to go, but instead of taking small steps, I kept trying to take one big leap at once.
Which brings me to the analogy between Agile and good programming habits (TDD being one of them).

Agile and Programming Analogy

One of the advantages of Agile development that I really like is the small steps (iterations) we take in order to reach our goal.
Check the two pictures below.
One shows how we aim towards a far away goal and probably miss.
The other shows how we divide into iterations and aim incrementally.

Aiming From Far


Aiming Iterative and Incremental

Develop in Small Incremental Iterations

This is the moral of the story.
Even if you know exactly what the structure of the classes should look like.
Even if you know exactly which design pattern to use.
Even if you know what to do.
Even if you know exactly what the end result should look like.

Keep on using the methods and practices that bring you to the goal in the safest and fastest way.
Do small steps.
Test each step.
Increment the functionality of the code in small chunks.
TDD.
Pair.
Keep calm.

Refactor Big Leap


Refactor Small Steps


Request Validation and Filtering by Flags – Redesign and Refactoring

General
In the previous posts I started describing a validation / filtering framework we’re building.
While showing the code, I am trying to show clean code, test orientation and code evolution.
There’s some agility in the process; we know the end requirements, but the exact details are evolving over time.

During the development we have changed the code to be more general, as we saw some patterns in it.
The code evolved as the flow evolved as well.

The flow as we now understand it
Here’s a diagram of the flow we’ll implement:

Request Sequence

The Pattern
At each step of the sequence (validation, filtering, action), we recognized the same pattern:

  1. We have specific implementations (filters, validations)
  2. We have an engine that wraps the specific implementations
  3. We need to map the implementations by flag and, based on the request’s flags, select the appropriate implementations
  4. We need to have a class that calls the mapper and then the engine (see the sketch after the diagram below)

A diagram showing the pattern

The Pattern
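
As a sketch of the fourth point, here is how such a calling class might look. The engine interface and the result type here are assumptions for illustration, not the actual classes from the repository.

public class RequestValidationFlow {
	private final MapperByFlag<RequestValidation> mapper;
	private final ValidationsEngine engine; // assumed interface

	public RequestValidationFlow(MapperByFlag<RequestValidation> mapper, ValidationsEngine engine) {
		this.mapper = mapper;
		this.engine = engine;
	}

	public ValidationResult validate(Request request) {
		// select the appropriate validations by the request's flags, then run them
		List<RequestValidation> validations = mapper.getOperations(request);
		return engine.validate(request, validations);
	}
}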

Source Code
In order to show some of the evolution of the code, and how refactoring changed it, I added tags in GitHub after each major change.

Code Examples
Let’s see what came out of the mapper pattern.

public interface MapperByFlag<T> {
  List<T> getOperations(Request request);
}

public abstract class AbstractMapperByFlag<T> implements MapperByFlag<T> {
  private List<T> defaultOperations;
  private Map<String, List<T>> mapOfOperations;

  public AbstractMapperByFlag(List<T> defaultOperations, Map<String, List<T>> mapOfOperations) {
    this.defaultOperations = defaultOperations;
    this.mapOfOperations = mapOfOperations;
  }

  @Override
  public final List<T> getOperations(Request request) {
    Set<T> selectedFilters = Sets.newHashSet(defaultOperations);
    Set<String> flags = request.getFlags();
    for (String flag : flags) {
      if (mapOfOperations.containsKey(flag)) {
        selectedFilters.addAll(mapOfOperations.get(flag));
      }
    }
    return Lists.newArrayList(selectedFilters);
  }
}

public class RequestValidationByFlagMapper extends AbstractMapperByFlag<RequestValidation> {
  public RequestValidationByFlagMapper(List<RequestValidation> defaultValidations,
      Map<String, List<RequestValidation>> mapOfValidations) {
    super(defaultValidations, mapOfValidations);
  }
}

public class ItemFiltersByFlagMapper extends AbstractMapperByFlag<Filter> {
  public ItemFiltersByFlagMapper(List<Filter> defaultFilters, Map<String, List<Filter>> mapOfFilters) {
    super(defaultFilters, mapOfFilters);
  }
}

I created a test for the abstract class, to show the flow itself.
The tests of the implementations use Java reflection to verify that the correct injected parameters are passed to the super constructor.
I am showing the imports here as well, as a reference for the static imports and the mockito and hamcrest packages and classes.

import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.when;

import java.util.List;
import java.util.Map;

import org.eyal.requestvalidation.model.Request;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;

@RunWith(MockitoJUnitRunner.class)
public class AbstractMapperByFlagTest {
	private final static String FLAG_1 = "flag 1";
	private final static String FLAG_2 = "flag 2";

	@Mock
	private Request request;

	private String defaultOperation1 = "defaultOperation1";
	private String defaultOperation2 = "defaultOperation2";
	private String mapOperation11 = "mapOperation11";
	private String mapOperation12 = "mapOperation12";
	private String mapOperation23 = "mapOperation23";

	private MapperByFlag<String> mapper;

	@Before
	public void setup() {
		List<String> defaults = Lists.newArrayList(defaultOperation1, defaultOperation2);
		Map<String, List<String>> mapped = ImmutableMap.<String, List<String>> builder()
		        .put(FLAG_1, Lists.newArrayList(mapOperation11, mapOperation12))
		        .put(FLAG_2, Lists.newArrayList(mapOperation23, mapOperation11)).build();
		mapper = new AbstractMapperByFlag<String>(defaults, mapped) {
		};
	}

	@Test
	public void whenRequestDoesNotHaveFlagsShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet());

		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}

	@Test
	public void whenRequestHasFlagsNotInMappingShouldReturnDefaultFiltersOnly() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet("un-mapped-flag"));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasOneFlagShouldReturnWithDefaultAndMappedFilters() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2));
	}
	
	@Test
	public void whenRequestHasTwoFlagsShouldReturnWithDefaultAndMappedFiltersWithoutDuplications() {
		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1, FLAG_2));
		List<String> filters = mapper.getOperations(request);
		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2, mapOperation23));
	}
}
// Additional imports needed here: java.lang.reflect.Field,
// org.hamcrest.Matchers.sameInstance and org.mockito.InjectMocks
@RunWith(MockitoJUnitRunner.class)
public class RequestValidationByFlagMapperTest {

	@Mock
	private List<RequestValidation> defaultValidations;
    
	@Mock
	private Map<String, List<RequestValidation>> mapOfValidations;

	@InjectMocks
	private RequestValidationByFlagMapper mapper;

	@SuppressWarnings("unchecked")
    @Test
	public void verifyParameters() throws NoSuchFieldException, SecurityException, IllegalArgumentException,
	        IllegalAccessException {
		Field defaultOperationsField = AbstractMapperByFlag.class.getDeclaredField("defaultOperations");
		defaultOperationsField.setAccessible(true);
        List<RequestValidation> actualFilters = (List<RequestValidation>) defaultOperationsField.get(mapper);
		assertThat(actualFilters, sameInstance(defaultValidations));

		Field mapOfFiltersField = AbstractMapperByFlag.class.getDeclaredField("mapOfOperations");
		mapOfFiltersField.setAccessible(true);
		Map<String, List<RequestValidation>> actualMapOfFilters = (Map<String, List<RequestValidation>>) mapOfFiltersField.get(mapper);
		assertThat(actualMapOfFilters, sameInstance(mapOfValidations));
	}
}

To Do
There are other classes that might be candidates for refactoring of some sort:
RequestFlowValidation and RequestFilter are similar,
and so are RequestValidationsEngineImpl and FiltersEngine.

To Do 2
Create a Matcher for the reflection part.

Code
As always, all the code can be found at:

A Tag for this post: all-components-in

Conclusion
The infrastructure is almost done.
During this time we are also implementing actual classes for the flow (validations, filters, actions).
These are not covered in the posts, nor in GitHub.
The infrastructure will be wired to a service we have using Spring.
This will be explained in future posts.