Ease at Work – A Talk by Kent Beck

The other day I was watching Kent Beck’s talk about Ease at Work.
It touched me deeply and I could really relate to what he described.
http://www.infoq.com/presentations/self-image

In order for me to remember, I decided to summarize the key points of the presentation.

The concept of Ease
There are many meanings to the word Ease, but in this case it’s not something like:
“I want to be rich and sit all day on the beach”.
Usually, we programmers like to take on hard and challenging work.

Here are the three points that relate to Ease in our case.

  1. State of comfort
  2. Freedom from worry, pain, or agitation
  3. Readiness in performance, facility

The pendulum inside us
People tend to have emotions, feelings and states of mind that swing like a pendulum.
We sometimes feel at the top of the world. “I am the best programmer in the world and can solve anything”. This is one side.
Other times we feel deeply at the bottom: “I am soooo bad at what I do. I should stop programming and work on something completely different”.

There’s a huge range between the two sides of the pendulum.

So, being at ease would mean bringing the amplitude of the swing down.
Decreasing the range.

Once the pendulum swings over a smaller range, I will feel better about what I’m doing.

List of things that help being at ease
In his talk, Kent explains each point. I will just list the points.

  1. My work matters
  2. My code works
  3. I’m proud of my work
  4. Making public commitments
  5. Accountability (I am accountable)
  6. I interpret feedback
  7. I am a beginner – I don’t know everything
  8. Meditation
  9. I serve. No expectations of reward

These points are a reminder for me. Whenever I don’t feel at ease, I should check what’s wrong, and these points are a good starting point.

My personal addition
I thought of another two points:

  1. Clear vision of the future
    A good developer (well, any employee) wants to know that he/she has a career path.
    I will be more at ease if “I know where I’m going from here”.

  2. Professional and flourishing environment
    A good developer will feel much more comfortable (at ease) in an environment of people he/she can learn from.
    The team should be diverse enough so everyone can learn from everyone else and mentor others.
    A good developer will feel at ease if he/she is surrounded by members who share the same passion (for technology, clean code or whatever).


Why Abstraction is Really Important

Abstraction
Abstraction is one of the key elements of good software design.
It helps encapsulate behavior. It helps decouple software elements. It helps create more self-contained modules. And much more.

Abstraction makes the application much easier to extend. It makes refactoring much easier.
When developing with a higher level of abstraction, you communicate the behavior rather than the implementation.

General
In this post, I want to introduce a simple scenario that shows how, by choosing a simple solution, we can get into a situation of hard coupling and rigid design.

Then I will briefly describe how we can avoid situations like this.

Case study description
Let’s assume that we have a domain object called RawItem.

public class RawItem {
    private final String originator;
    private final String department;
    private final String division;
    private final Object[] moreParameters;
    
    public RawItem(String originator, String department, String division, Object... moreParameters) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.moreParameters = moreParameters;
    }
}

The first three parameters represent the item’s key.
That is, an item comes from an originator, a department and a division.
The “moreParameters” argument is just to emphasize that the item has more parameters.

This triplet has two basic usages:
1. As key to store in the DB
2. As key in maps (key to RawItem)

Storing in DB based on the key
The DB tables are sharded in order to evenly distribute the items.
Sharding is done by a hash key modulo function.
This function works on a string.

Suppose we have N shard tables (RAW_ITEM_REPOSITORY_00, RAW_ITEM_REPOSITORY_01, .., RAW_ITEM_REPOSITORY_NN),
then we’ll distribute the items based on some function and modulo:

String rawKey = originator + "_" + department + "_" + division;
// func is a String -> Integer function, N = # of shards
// Representation of the key is described below
int shard = func(rawKey) % N;
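
For illustration only, func could be a simple hash-based function (a hypothetical choice; the actual function is not shown here):

// hypothetical String -> Integer function; the mask keeps the result non-negative
static int func(String rawKey) {
    return rawKey.hashCode() & 0x7fffffff;
}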

Using the key in maps
The second usage for the triplet is mapping the items for fast lookup.
So, when NOT using abstraction, the maps will usually look like:

Map<String, RawItem> mapOfItems = new HashMap<>();
// Fill the map...
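
Filling such a map means repeating the key concatenation at every call site, e.g. (a sketch):

// every caller has to know how the key string is built
String rawKey = originator + "_" + department + "_" + division;
mapOfItems.put(rawKey, rawItem);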

“Improving” the class
We see that we have common usage of the key as a string, so we decide to put the string representation in the RawItem.

// new member
private final String key;

// in the constructor:
this.key = this.originator + "_" + this.department + "_"  + this.division;

// and a getter
public String getKey() {
  return key;
}

Assessment of the design
There are two flaws here:
1. Coupling between the sharding distribution and the items’ mapping
2. The mapping key is rigid. Any change forces a change in the key, which might introduce hard-to-find bugs

And then comes a new requirement
Up until now, the triplet: originator, department and division made up a key of an item.
But now, a new requirement comes in.
A division can have a subdivision.
It means that, unlike before, we can have two different items from the same triplet. The items will differ by the subdivision attribute.

Difficult to change
Regarding the DB distribution, we’ll need to keep the concatenated key of the triplet.
We must keep the modulo function the same. So distribution will remain based on the triplets, but the schema will change and have a ‘subdivision’ column as well.
We’ll change the queries to use the subdivision together with the original key.

In regard to the mapping, we’ll need to do a massive refactoring and to pass an ItemKey (see below) instead of just String.

Abstraction of the key
Let’s create ItemKey

public class ItemKey {
    private final String originator;
    private final String department;
    private final String division;
    private final String subdivision;

    public ItemKey(String originator, String department, String division, String subdivision) {
        this.originator = originator;
        this.department = department;
        this.division = division;
        this.subdivision = subdivision;
    }

    public String asDistribution() {
        return this.originator + "_" + this.department + "_"  + this.division;
    }
}
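
One caveat worth noting: for ItemKey to work correctly as a key in a HashMap, it must also override equals and hashCode (omitted above for brevity). A minimal version could be:

// requires: import java.util.Objects;
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof ItemKey)) return false;
    ItemKey other = (ItemKey) o;
    return Objects.equals(originator, other.originator)
            && Objects.equals(department, other.department)
            && Objects.equals(division, other.division)
            && Objects.equals(subdivision, other.subdivision);
}

@Override
public int hashCode() {
    return Objects.hash(originator, department, division, subdivision);
}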

And,

Map<ItemKey, RawItem> mapOfItems = new HashMap<>();
// Fill the map...

// new constructor for RawItem
public RawItem(ItemKey itemKey, Object... moreParameters) {
    // fill the fields
}

Lesson Learned and conclusion
I wanted to show how a simple decision can really hurt.

And how, with a small change, we made the key abstract.
In the future the key can have even more fields, but we’ll only need to change its inner implementation.
The logic and the mapping usage should not change.

Regarding the change process,
I haven’t described how to do the refactoring, as it really depends on what the code looks like and how well it is tested.
In our case, some parts were easy, while others were really hard. The hard parts were around code that looked deep into the implementation of the key (string) and the item.

This situation was real
We actually had this flaw in our design.
Everything was fine for two years, until we had to change the key (add the subdivision).
Luckily all of our code is tested so we could see what breaks and fix it.
But it was painful.

There are two abstractions that we could have implemented initially:
1. The more obvious is using a KEY class (as described above), even if it only has one String field
2. Any map usage needs to be examined to see whether we’d benefit from hiding it behind an abstraction

The second abstraction is harder to grasp, fully understand and implement.

So,
abstract, tell the story using the interfaces, and don’t get into the details while telling it.


PostgreSQL on Fedora

I bought (and started reading) the book Seven Databases in Seven Weeks in order to get a better understanding of the different SQL / NoSQL paradigms, see the pros and cons of each approach, and play around with each type.

In this post I want to share the installation process I had with PostgreSQL on Fedora.
I will write a different post about the book itself.

The Installation
I don’t know why, but installing PostgreSQL on Fedora wasn’t as easy as expected.
It took me several tries to make it work.

I went over the tutorials again and again, and read posts and questions about the same problems I had.
Eventually I made it work. I am not sure whether this is the correct way, but it’s good enough for me to work on it.

The Errors
During my attempts, I got some errors.

The most annoying one, was:

psql: could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket "/var/lib/pgsql/.s.PGSQL.5432"?

I also got

FATAL:  could not create lock file "/var/run/postgresql/.s.PGSQL.5432.lock": Permission denied

Sometimes I got “port 5432 already in use”.

Took some time, but I managed to install it
I am not entirely sure how I made it work, but I’ll post the actions I took here.
(for my future self, of course)

Installation Instructions: http://www.postgresql.org/download/linux/redhat/

# install postgresql on the machine
sudo yum install postgresql-server

# fill the data directory (AKA init-db)
# REMEMBER - here it is: /var/lib/pgsql/data/
sudo postgresql-setup initdb

# Enable postgresql to be started on bootup:
# (I hope it works...)
sudo systemctl enable postgresql.service

The next steps were to run the service, log in, create a DB and start playing.
This was the part where I kept getting the errors described above.

The first step was to log in as the postgres user, which is created during installation.
You can’t start the server as sudo.
As I am (still) not a Linux expert, I had to figure out that without a password for postgres, I’d need to su from root.

# Login
sudo -s
# password for root...

# switch to postgres
su - postgres

The next step was to start the service.
That was the painful part, although very satisfying once it succeeded.
After looking carefully at the error message and some Googling, I decided to add the -D flag to the command.
I hadn’t tried it before, as I thought it wasn’t necessary because I had set PGDATA.
Eventually I am not using PGDATA at all.

So this is the command that worked for me:

pg_ctl start -D /var/lib/pgsql/data/

And now what…?

In my first attempts, whenever I tried to run a PG command (psql, createdb), I got the annoying error described above.
But now it worked!

As the postgres user, I ran psql and I was logged in.
After that I could start working on the book.
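
From there, the first steps were as simple as the following (a sketch; ‘book’ is just a hypothetical database name):

# as the postgres user: create a database and connect to it
createdb book
psql book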

Some Tips

  • Don’t forget to add a semi-colon at the end of the commands 🙂
  • create extension tablefunc;
    create extension dict_xsyn;
    create extension fuzzystrmatch;
    create extension pg_trgm;
    create extension cube;
    
  • I didn’t have to modify any configuration file (e.g. pg_hba.conf).
  • README file /usr/share/doc/postgresql/README.rpm-dist

    Disclaimer
    This post was made out of notes that I wrote to myself during the hard installation.
    I am sure this is not the best way (or maybe it is?).

    In the following posts I will share the reading progress of the book.

    I added a GitHub project with code I’m writing while reading the book.
    https://github.com/eyalgo/seven-dbs-in-seven-weeks

    (EDIT – I wrote this post at 2 AM, so I hope there aren’t any major mistakes)


    The Node Beginner Book – Book Review

    A few days ago I finished a beginner’s book about Node.
    Name of the book: The Node Beginner Book, A comprehensive Node.js tutorial
    Author: Manuel Kiessling
    I liked it and the way it’s written, so I would like to share it with you.

    I came across this book while looking for Node tutorials on the web.
    So first, here’s the site of the book: http://www.nodebeginner.org/
    It contains part of the first chapter of the book, as a tutorial and explanation.
    I started going over the tutorial and when I was done with this part I went on and bought the book.
    Another cool thing is that you can buy it from Leanpub as part of a bundle that has Hands-On Node.js as well. (I haven’t read this one yet).

    The book walks through a very basic Node server construction.
    Instead of just Hello World, we actually build a small server that can upload and show an image.

    It covers the basics, which I feel is enough to understand the concepts of Node and to give a really good kickoff for someone who is interested in Node.

    We start by installing Node and understanding a little bit about JS.
    At the beginning there’s a clear explanation of the use case of what we’re going to develop.
    And of the importance of the architecture and what it should look like.

    Then, step by step, we build server.js, index.js, router.js and the request handlers.
    I think that this is really important, as it demonstrates a good approach to architecture design.
    The author emphasizes how important it is to separate concerns and keep the code organized.

    Another really good aspect is the explanation of functional programming and how it helps in Node and an HTTP server. Now, you’re not going to be a functional programmer after reading this book, but you will definitely understand the concepts and get the idea.
    For me, it’s a really good thing. As a Java developer, I don’t use the functional paradigm, and it’s an important tool these days.
    (Yes, I know that there are many other functional languages. But that’s my point. By reading this book, I had a good opportunity to play with some functional paradigm.)

    The book gradually evolves the server creation.
    After we build the server.js, we start enhancing it.
    We build an index.js file that holds the routing map.
    We build router.js that routes to the request handler.
    And requestHandlers.js to work with the different requests.

    Each part in the system evolves while reading the book.
    For example, at the beginning a function does not accept any parameters. Then it accepts some, and later the parameters change.
    Every change is explained in the context and how it helps with aspects such as good architecture and design, asynchronous and other concepts.

    One of the examples I liked was why passing a callback function is important. The book shows nicely what happens if you run a slow operation (a find on the file system) synchronously instead of with a callback: basically, your whole server gets stuck.

    Towards the end, after we’ve built a simple yet flexible server, we learn some technical Node stuff: how to use external libraries with the package manager NPM.
    And by using it, we learn how to show an image, upload a file and rename it.

    At the end of the book we get a working Node server that can upload an image and show it.
    It’s fun!

    I highly recommend it to anyone who wants to understand what Node is all about, but more than just the syntax.

    The author has another book, which I bought but haven’t read yet: The Node Craftsman Book.

    Happy Reading!


    The Foreman Role in a Team

    There is a lot of discussion about the need for a foreman role in a software team.
    Robert C. Martin wrote about it in Where is the Foreman?

    I recently read a post by David Tanzer who disagrees with Uncle Bob’s point: We don’t need a foreman

    The way I see it, a foreman role is important, but perhaps not as extreme as Uncle Bob describes.
    Let’s start by quoting the foreman’s role as Uncle Bob describes it (in construction and in software):

    The foreman on a construction site is the guy who is responsible for making sure all the workers do things right. He’s the guy with the tape-measure that goes around making sure all the walls are placed properly. He’s the guy who examines all the struts, joists, and beams to make sure they are installed correctly, and don’t have any significant defects. He’s the guy who counts the screws in the flooring to make sure it won’t squeak when you walk on it. He’s the guy — the guy who takes responsibility — the guy who makes sure everything is done right.

    What would the foreman do on software project? He’d do the same thing he does on a construction project. He’d make sure everything was done, done right, and done on time. He’d be the only one with commit rights. Everybody else would send him pull requests. He’d review each request in turn and reject those that didn’t have sufficient test coverage, or that had dirty code, or bad variable names, or functions that were too long. He’d reject those that, in his opinion, did not meet the level of quality he demands for the project.

    I think that the commit rights issue is the crucial point for many critics of this idea.

    I would like to suggest a middle way.
    A foreman role, but also team responsibility.

    In real life the team is diverse. Some are seniors, some juniors. Some are experts in a specific field, others in another. Some are experts in TDD, some in design, some in DB and SQL, etc.

    Here are the key points as I see them:
    A diverse team.
    A foreman who’s responsible for the quality and delivery.
    The foreman will sometimes make the final call, even if there is still a disagreement.
    The foreman is also responsible for making sure everyone works to the standards he introduced (with the help of the team).
    Everyone can commit. Not just the foreman.
    Everyone can suggest anything.

    Here are some reasons why it might work.

    1. Everyone can commit and everyone can see others’ commits. This means that there is trust between the team members. It also gives each member more responsibility. The foreman in this case will still look for all the things that Uncle Bob describes. But when he sees something wrong (a missing test? code that is not well designed?) he will approach the person who committed the code and discuss what went wrong. The foreman will have an opportunity to mentor other team members and pass on his knowledge.
    2. The foreman can be a peer, with more responsibilities. If Fred, the foreman, notices that people make mistakes, he will discuss it with them. The foreman has more responsibility. He needs to know how to listen. He needs to explain and not blame.
    3. The foreman does not have to be the most experienced developer in everything. He can’t be. He may be the most experienced in one or two or three fields. But not all. So if Alice is the most experienced DB developer, Fred the foreman should see that she helps other team members with SQL-related stuff. He will still remind Alice about the procedures and code of the whole system.
    4. Sometimes the foreman will need to make decisions. Sometimes not everyone will agree. The foreman needs to know when to stop an argument and make the call.
    5. The foreman doesn’t need to have sole responsibility for quality. But he’s the one the management should approach. This is a tricky part. It’s hard to achieve. The team is responsible for the quality and delivery of the code. The foreman is responsible for making sure the team achieves this. The foreman is responsible for making sure everyone practices good coding (and everything that implies). The foreman is the one who needs to mentor team members who do not know how to produce quality code.
    6. The team is responsible for the architecture and design. As I mentioned before, the foreman will sometimes need to stop the discussion and make a decision. Each member of the team should have the opportunity to come forward with suggestions. Sometimes the foreman will bring the best ideas (after all, he’s supposed to be the most experienced), but more than once, another member will introduce the correct design. The foreman needs to listen.
    7. During planning the team will estimate effort (e.g. give points to user stories). Then, if the whole team is responsible for the design and architecture, the members will create the tasks with some time estimations. The foreman’s responsibility would be to see that everyone understands the priorities. He should be part of the team while designing and lead the design. If the team did not understand the priorities and didn’t produce quality code, it’s his responsibility. But also the team’s.
    8. The foreman should introduce new technologies to the team. The foreman should introduce the team to coding practices. The foreman must pair with other members. Juniors and seniors. While pairing with juniors he actually mentors them. The foreman must see that the team does pair-programming with each other. The foreman is the one who establishes code review habits in the team. As a foreman he can ask to review code even if it was already reviewed by another person. Sometimes this brings some antagonism, but as mentioned before, he has the responsibility and he’s the one who needs to answer to the management.

    Uncle Bob suggested a rather extreme approach.
    Perhaps it suits some extreme cases. He describes an open source project that actually works with several foremen: A Spectrum of Trust
    On the other side, David Tanzer shows correctly why this approach may erode the team’s spirit and trust.

    I think that it’s possible to have a middle way.
    I think that a team can have a foreman, a person who’s in charge. But still let everyone be involved. Have trust, spirit and motivation.


    Installing Fedora and Solving a Wifi Issue

    I am writing this post as a future reminder for myself.

    I decided to install a Linux OS on an old laptop, and I didn’t want a Debian system (I am using Ubuntu at the office). So I went with Fedora. I just wanted to get my hands dirtier with Linux.

    For installation I used Linux Live USB Creator

    I picked the latest Fedora installation (V. 20 with KDE) and installed it onto my USB drive.

    After that I rebooted my laptop from the USB and installed the OS. Really simple, I must say.

    The problem now was that the OS could not see the wireless card.

    The laptop is Dell Inspiron. The wifi card is Broadcom.

    To check which wifi card you have, run either one of:

    • lspci
    • lspci | grep -i Network

    So here’s what I needed to do:

    1. Install the RPM Fusion repositories, free and nonfree, from http://rpmfusion.org/Configuration
    2. Run the following command: su -c 'yum install broadcom-wl'
    3. Reboot

    And I had Fedora KDE V20 with Wifi!

    A small note about CentOS: I tried installing it before, but just could not fix the Wifi issue.


    Agile Mindset During Programming

    I’m Stuck

    Recently I found myself in several situations where I just couldn’t write code. Or at least, “good code”.
    First, I had “writer’s block”. I just could not see what my next test to write was going to be.
    I could not find the name for the class / interface I needed.
    Second, I just couldn’t simplify my code. Each time I tried to change something (class / method) to a simpler construction, things got worse. Sometimes they even broke.

    I was stuck.

    The Tasks

    Refactor to Patterns

    One of the situations we faced was refactoring a certain piece of the code.
    This piece of code is the manual wiring part. We use the DI pattern throughout our system, but due to some technical constraints, we must do the injection by hand. We can live with that.
    So refactoring the wiring part would have given us a nice option to change some of the implementations during boot.
    Some of the concrete classes should differ from others based on some flags.
    The design patterns we understood we would need were Factory Method and Abstract Factory.
    This last remark is important for understanding why I had those difficulties.
    I will get to it later.
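
    To make the goal concrete, here is a minimal sketch of the shape we were aiming for (all names here are hypothetical, not our actual code):

    // Hypothetical sketch: an abstract factory chooses concrete implementations at boot time.
    interface Service {}
    class DefaultService implements Service {}
    class SpecialService implements Service {}

    interface ServiceFactory {
        Service createService();
    }

    class DefaultServiceFactory implements ServiceFactory {
        public Service createService() { return new DefaultService(); }
    }

    class SpecialServiceFactory implements ServiceFactory {
        public Service createService() { return new SpecialService(); }
    }

    class Wiring {
        // Factory Method: select the factory based on a boot flag
        static ServiceFactory factoryFor(boolean specialMode) {
            return specialMode ? new SpecialServiceFactory() : new DefaultServiceFactory();
        }
    }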

    New Module

    Another task was to create a new module that gets some input items, extracts data from them, sends it to a service, parses the response, modifies the data accordingly and returns items with the modified data.
    While talking about it with a peer, we understood we needed several classes.
    As always we wanted to have high quality code by using the known OOD principles wherever we could apply them.

    So What Went Wrong?

    In the case of refactoring the wiring part, I constantly tried to immediately create the end result of the abstract factory and the factory method that would call it.
    There are a lot of details in that wiring code. Some are common and some needed to be separated out by the factory.
    I just couldn’t find the correct places to extract into methods and then into another class.
    Each time I had to move code from one location and dependency to another.
    I couldn’t tell exactly what the factory’s signature and methods would be.

    In the case of the new module, I knew that I wanted several classes. Each with one responsibility. I knew I wanted some level of abstraction and good encapsulation.
    So I kept trying to create this great encapsulated abstract data structure. And the code kept getting extremely complicated.
    Important note: I always use the test-first approach.
    Each time I tried to create a test for a certain behavior, it was really, really complicated.

    I stopped

    I went to have a cup of coffee.
    I went to read some unrelated stuff.
    And I talked to one of my peers.
    We both understood what we needed to do.
    I went home…

    And then it hit me

    The problem I had was that I knew where I needed to go, but instead of taking small steps, I kept trying to take one big leap at once.
    Which brings me to the analogy between Agile and good programming habits (TDD being one of them).

    Agile and Programming Analogy

    One of the advantages of Agile development that I really like is the small steps (iterations) we take in order to reach our goal.
    Check the two pictures below.
    One shows how we aim towards a faraway goal and probably miss.
    The other shows how we divide the work into iterations and aim incrementally.

    Aiming From Far

    Aiming Iterative and Incremental

    Develop in Small Incremental Iterations

    This is the moral of the story.
    Even if you know exactly what the structure of the classes should look like.
    Even if you know exactly which design pattern to use.
    Even if you know what to do.
    Even if you know exactly how the end result should look like.

    Keep on using the methods and practices that bring you to the goal in the safest and fastest way.
    Do small steps.
    Test each step.
    Increment the functionality of the code in small chunks.
    TDD.
    Pair.
    Keep calm.

    Refactor Big Leap

    Refactor Small Steps


    Law of Demeter

    Reduce coupling and improve encapsulation…

    General
    In this post I want to go over Law of Demeter (LoD).
    I find this topic extremely important for keeping the code clean, well-designed and maintainable.

    In my experience, seeing it broken is a huge smell for bad design.
    Following the law, or refactoring based on it, leads to much improved, readable and more maintainable code.

    So what is Law of Demeter?
    I will start by mentioning the 4 basic rules:

    Law of Demeter says that a method M of object O can access / invoke methods of:

    1. O itself
    2. M’s input arguments
    3. Any object created in M
    4. O’s direct fields / dependencies

    These are fairly simple rules.

    Let’s put this in other words:
    Each unit (method) should have limited knowledge about other units.

    Metaphors
    The most common one is: Don’t talk to strangers

    How about this:
    Suppose I buy something at 7-11.
    When I need to pay, will I give my wallet to the clerk so she will open it and get the money out?
    Or will I give her the money directly?

    How about this metaphor:
    When you take your dog out for a walk, do you tell it to walk or its legs?

    Why do we want to follow this rule?

    • We can change a class without having a ripple effect of changing many others.
    • We can change called methods without changing anything else.
    • Using LoD makes our tests much easier to construct. We don’t need to write so many ‘when‘ for mocks that return and return and return.
    • It improves the encapsulation and abstraction (I’ll show in the example below).
      But basically, we hide “how things work”.
    • It makes our code less coupled. A caller method is coupled to only one object, and not to all of its inner dependencies.
    • It will usually model the real world better.
      Take as an example the wallet and payment.

    Counting Dots?
    Although many dots usually imply a LoD violation, sometimes it doesn’t make sense to “merge the dots”.
    Does:
    getEmployee().getChildren().getBirthdays()
    suggest that we do something like:
    getEmployeeChildrenBirthdays() ?
    I am not entirely sure.

    Too Many Wrapper Classes
    This is another possible outcome of trying to follow LoD.
    In this particular situation, I strongly believe that it’s another design smell which should be taken care of.

    As always, we must have common sense while coding, cleaning and / or refactoring.

    Example
    Suppose we have a class: Item
    The item can hold multiple attributes.
    Each attribute has a name and values (it’s a multi-value attribute).

    The simplest implementation would be to use a Map.

    public class Item {
        private final Map<String, Set<String>> attributes;

        public Item(Map<String, Set<String>> attributes) {
            this.attributes = attributes;
        }

        public Map<String, Set<String>> getAttributes() {
            return attributes;
        }
    }

    Let’s have a class ItemSaver that uses the Item and its attributes:
    (please ignore the unstructured methods. This is an example for LoD, not SRP 🙂 )

    public class ItemSaver {
        private String valueToSave;

        public ItemSaver(String valueToSave) {
            this.valueToSave = valueToSave;
        }

        public void doSomething(String attributeName, Item item) {
            Set<String> attributeValues = item.getAttributes().get(attributeName);
            for (String value : attributeValues) {
                if (value.equals(valueToSave)) {
                    doSomethingElse();
                }
            }
        }

        private void doSomethingElse() {
        }
    }

    Suppose I know (from the context of the application) that it’s a single value.
    And I want to get it. Then the code would look like:
    Set<String> attributeValues = item.getAttributes().get(attributeName);
    String singleValue = attributeValues.iterator().next();
    // String singleValue = item.getAttributes().get(attributeName).iterator().next();

    I think it’s clear that we have a problem.
    Wherever we use the attributes of the Item, we know how it works; we know its inner implementation.
    It also makes our tests much harder to maintain.

    Let’s see an example of a test using mocks (Mockito).
    You can imagine how much effort it takes to change and maintain it.

    Item item = mock(Item.class);
    Map<String, Set<String>> attributes = mock(Map.class);
    Set<String> values = mock(Set.class);
    Iterator<String> iterator = mock(Iterator.class);
    when(iterator.next()).thenReturn("the single value");
    when(values.iterator()).thenReturn(iterator);
    when(attributes.containsKey("the-key")).thenReturn(true);
    when(attributes.get("the-key")).thenReturn(values);
    when(item.getAttributes()).thenReturn(attributes);

    We can use a real Item instead of mocking, but we’ll still need to create lots of pre-test data.

    Let’s recap:

    • We exposed the inner implementation of how Item holds Attributes
    • In order to use attributes, we needed to ask the item and then to ask for inner objects (the values).
    • If we ever want to change the attributes implementation, we will need to make changes in the classes that use Item and the attributes. Probably a lot of classes.
    • Constructing the test is tedious, cumbersome and error-prone, and requires lots of maintenance.

    Improvement
    The first improvement would be to let Item delegate to the attributes.

    public class Item {
        private final Map<String, Set<String>> attributes;

        public Item(Map<String, Set<String>> attributes) {
            this.attributes = attributes;
        }

        public boolean attributeExists(String attributeName) {
            return attributes.containsKey(attributeName);
        }

        public Set<String> values(String attributeName) {
            return attributes.get(attributeName);
        }

        public String getSingleValue(String attributeName) {
            return values(attributeName).iterator().next();
        }
    }

    And the test becomes much simpler:

    Item item = mock(Item.class);
    when(item.getSingleValue("the-key")).thenReturn("the single value");

    We are (almost) totally hiding the implementation of attributes from other classes.
    The client classes are not aware of the implementation except in two cases:

    1. Item still knows how attributes are built.
    2. The class that creates Item (whichever it is), also knows the implementation of attributes.

    The two points above mean that if we change the implementation of Attributes (to something other than a map), at least two other classes will need to be changed. This is a great example of high coupling.

    The Next Step Improvement
    The solution above will sometimes (usually?) be enough.
    As pragmatic programmers, we need to know when to stop.
    However, let’s see how we can improve the first solution even further.

    Create a class Attributes:

    public class Attributes {
        private final Map<String, Set<String>> attributes;

        public Attributes() {
            this.attributes = new HashMap<>();
        }

        public boolean attributeExists(String attributeName) {
            return attributes.containsKey(attributeName);
        }

        public Set<String> values(String attributeName) {
            return attributes.get(attributeName);
        }

        public String getSingleValue(String attributeName) {
            return values(attributeName).iterator().next();
        }

        public Attributes addAttribute(String attributeName, Collection<String> values) {
            this.attributes.put(attributeName, new HashSet<>(values));
            return this;
        }
    }
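
    Filling it could then look like this (a usage sketch; requires import java.util.Arrays):

    Attributes attributes = new Attributes()
            .addAttribute("color", Arrays.asList("red", "blue"))
            .addAttribute("size", Arrays.asList("M"));
    Item item = new Item(attributes);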

    And the Item that uses it:

    public class Item {
        private final Attributes attributes;

        public Item(Attributes attributes) {
            this.attributes = attributes;
        }

        public boolean attributeExists(String attributeName) {
            return attributes.attributeExists(attributeName);
        }

        public Set<String> values(String attributeName) {
            return attributes.values(attributeName);
        }

        public String getSingleValue(String attributeName) {
            return attributes.getSingleValue(attributeName);
        }
    }

    (Did you notice? The implementation of attributes inside Item was changed, but the test did not need to change. This is thanks to the small change of delegation.)

    In the second solution we improved the encapsulation of Attributes.
    Now even Item does not know how it works.
    We can change the implementation of Attributes without touching any other class.
    We can make different implementations of Attributes:
    – An implementation that holds a Set of values (as in the example).
    – An implementation that holds a List of values.
    – Any totally different data structure we can think of.

    As long as all of our tests pass, we can be sure that everything is OK.

    What did we get?

    • The code is much more maintainable.
    • Tests are simpler and more maintainable.
    • It is much more flexible. We can change implementation of Attributes (map, set, list, whatever we choose).
    • Changes in Attributes do not affect any other part of the code. Not even the classes that use it directly.
    • Modularization and code reuse. We can use Attributes class in other places in the code.

    Project Migration from Sourceforge to GitHub

    I have an old project, named JVDrums, which was located at Sourceforge.
    http://sourceforge.net/projects/jvdrums/

    About JVDrums
    It was written around 6 years ago (this is the date shown in the commit history: 2008-05-09).

    The project is a MIDI client for Roland Electronic Drums for uploading and backing up drumsets.
    It was an early attempt to use testing during development (an early TDD attempt).

    I used TestNG for the testing.

    Initially I created it for my own model, which is Roland TD-12. I needed a small app for uploading drumsets which other users created and sent me.
    When I published it in some forums I was asked to develop the client for other models (TD-6, TD-10).

    That was cool, as I didn’t have the real module (each model has its own module), so how could I develop and test for it?

    Each module has a MIDI specification, so I downloaded them from Roland’s website.
    Then I created tests that simulated the structure of the MIDI file, and with those I could hack the upload, download and editing.

    I also created a basic UI using Java Swing.

    Migration
    All I needed to do was follow the instructions at:
    https://github.com/nirvdrum/svn2git#readme
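
    In essence, the process boils down to something like this (a sketch from memory; the README has the exact options, and the SVN URL below is illustrative):

    # install the svn2git tool (a Ruby gem)
    sudo gem install svn2git

    # convert the Sourceforge SVN repository into a local git repository
    svn2git http://svn.code.sf.net/p/jvdrums/code

    # point it at the new GitHub repository and push
    git remote add origin git@github.com:eyalgo/jvdrums.git
    git push origin master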

    And here we go: https://github.com/eyalgo/jvdrums

    So if you need to migrate from Sourceforge to GitHub just follow that link.

    Using Reflection for Testing

    I am working on a presentation about the ‘Single Responsibility Principle’, based on my previous post.
    It takes most of my time.

    In the meantime, I want to share some sample code showing how I test inner fields in my classes.
    I do it for a special case of testing, which is more of an integration test.
    In the standard unit tests of the dependent class, I use mocks of the dependencies.

    The Facts

    1. All of the fields (and dependencies) in our classes are private
    2. The class does not have getters for its dependencies
    3. We wire things up using Spring (XML context)
    4. I want to verify that dependency interface A is wired correctly to dependent class B

    One approach would be to wire everything and then run some kind of integration test of the logic.
    I don’t want to do this. It will make the test hard to maintain.

    The other approach is to check wiring directly.
    And for that I am using reflection.

    Below is the sample code of the testing method, and its usage.
    Notice how I catch the checked exceptions and throw a RuntimeException in case there is a problem.
    This way, the calling test code stays cleaner.


    // Somewhere in a different utility class for testing
    // (requires: import java.lang.reflect.Field;)
    @SuppressWarnings("unchecked")
    public static <T> T realObjectFromField(Class<?> clazz, String fieldName, Object object) {
        Field declaredField = accessibleField(clazz, fieldName);
        try {
            return (T) declaredField.get(object);
        } catch (IllegalArgumentException | IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    private static Field accessibleField(Class<?> clazz, String fieldName) {
        try {
            Field declaredField = clazz.getDeclaredField(fieldName);
            declaredField.setAccessible(true);
            return declaredField;
        } catch (NoSuchFieldException | SecurityException e) {
            throw new RuntimeException(e);
        }
    }

    // This is how we use it in a test method
    import static mypackage.ReflectionUtils.realObjectFromField;

    ItemFiltersMapperByFlag mapper = realObjectFromField(ItemsFilterExecutor.class, "filtersMapper", filterExecutor);
    assertNotNull("mapper is null. Check wiring", mapper);

    The Single Responsibility Principle

    Introduction
    In this post I would like to cover the Single Responsibility Principle (SRP).
    I think that this is the basis of any clean and well-designed system.

    What is SRP?
    The term was introduced by Robert C. Martin.
    It is the ‘S’ from the SOLID principles, which are the basis for OOD.
    http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
    Here’s the PDF paper for SRP by Robert C. Martin https://docs.google.com/file/d/0ByOwmqah_nuGNHEtcU5OekdDMkk/

    From Wikipedia:

    …In object-oriented programming, the single responsibility principle states that every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility….

    From Clean Code:

    A class or module should have one, and only one, reason to change.

    So if a class (or module) needs to be modified for more than one reason, it does more than one thing. That is, it has more than one responsibility.

    Why SRP?

    • Organize the code
      Let’s imagine a car mechanic who owns a repair shop.
      He has many, many tools to work with. The tools are divided into types: pliers, screwdrivers (Phillips / blade), hammers, wrenches (tubing / hex) and many more.

      How would it be easier to organize the tools?
      A few drawers with different types in each one of them?
      Or many small drawers, each containing a specific type?

      Now, imagine the drawer as the module. This is why many small modules (classes) are more organized than a few large ones.

    • Less fragile
      When a class has more than one reason to be changed, it is more fragile.
      A change in one location might lead to unexpected behavior in totally unrelated places.
    • Low Coupling
      More responsibilities lead to higher coupling.
      The couplings are the responsibilities.
      Higher coupling leads to more dependencies, which are harder to maintain.
    • Code Changes
      Refactoring is much easier for a single responsibility module.
      If you want to get the shotgun effect, let your classes have more responsibilities.
    • Maintainability
      It’s obvious that it is much easier to maintain a small single-purpose class than a big monolithic one.
    • Testability
      A test class for a ‘one purpose class’ will have fewer test cases (branches).
      If a class has one purpose it will usually have fewer dependencies, thus less mocking and test preparation.
      The “self-documentation by tests” becomes much clearer.
    • Easier Debugging
      Since I started doing TDD and the test-first approach, I hardly ever debug. Really.
      But there are times when I must debug in order to understand what’s going on.
      In a single-responsibility class, finding the bug or the cause of the problem becomes a much easier task.

    What needs to have single responsibility?
    Each part of the system.

    • The methods
    • The classes
    • The packages
    • The modules

    How to Recognize a Break of the SRP?

    • Class Has Too Many Dependencies
      A constructor with too many input parameters implies many dependencies (hopefully you do inject dependencies).

      Another way to see too many dependencies is through the test class.
      If you need to mock too many objects, it usually means breaking the SRP.

    • Method Has Too Many Parameters
      Same as the class’s smell. Think of the method’s parameters as dependencies.
    • The Test Class Becomes Too Complicated
      If the test has too many variants, it might suggest that the class has too many responsibilities.
      It might suggest that some methods do too much.
    • Class / Method is Long
      If a method is long, it might suggest it does too much.
      Same goes for a class.
      My rule of thumb is that a class should not exceed 200-250 LOC. Imports included 😉
    • Descriptive Naming
      If you need the word AND to describe what your class / method / package does, it probably breaks the SRP.
    • Class With Low Cohesion
      Cohesion is an important topic of its own and should have its own post.
      But Cohesion and SRP are closely related and it is important to mention it here.
      In general, if a class (or module) is not cohesive, it probably breaks the SRP.

      A hint for a non-cohesive class:
      The class has two fields. One field is used by some methods. The other field is used by the other methods. (See the sketch after this list.)

    • Change In One Place Breaks Another
      If a change in the code, to add a new feature or simply refactor, broke a test which seems unrelated, it might suggest a break of the SRP.
    • Shotgun Effect
      If a small change makes a big ripple in your code, and you need to change many locations, it might suggest, among other smells, that the SRP is broken.
    • Unable to Encapsulate a Module
      I will explain using Spring, but the concept is important (not the implementation).
      Suppose you use the @Configuration or XML configuration.
      If you can’t encapsulate the beans in that configuration, it should give you a hint of too much responsibility.
      The Configuration should hide any inner bean and expose minimal interfaces.
      If you need to change the Configuration for more than one reason, then, well, you know…
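
    Here is the low-cohesion sketch promised above (a hypothetical class, for illustration only): ‘items’ is used only by the storage methods, ‘auditLog’ only by the audit methods. Splitting it into two classes would restore cohesion, and the SRP.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical non-cohesive class: two unrelated responsibilities in one place.
    public class ItemService {
        private final List<String> items = new ArrayList<>();     // used only by the item methods
        private final List<String> auditLog = new ArrayList<>();  // used only by the audit methods

        public void addItem(String item) { items.add(item); }
        public boolean hasItem(String item) { return items.contains(item); }

        public void audit(String event) { auditLog.add(event); }
        public List<String> auditEvents() { return auditLog; }
    }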

    How to make the design compliant with the Single Responsibility Principle
    The suggestions below can apply to other topics of the SOLID principles.
    They are also good for any Clean Code suggestion.
    But here they are aimed at the Single Responsibility Principle.

    • Awareness
      This is a general suggestion for clean code.
      We need to be aware of our code. We need to take care.
      As for SRP, we need to try to catch, as early as we can, a class that is responsible for too much.
      We need to always look out for methods that are too big.
    • Testable Code
      Write your code in a way that everything can be tested.
      Then you will surely want your tests to be simple and descriptive.
    • TDD
      (I am not going to add anything here)
    • Code Coverage Metrics
      Sometimes, when a class does too much, it won’t have 100% coverage at first shot.
      Check the code quality metrics.
    • Refactoring and Design Patterns
      For SRP, we’ll mostly do extract-method, extract-class, move-method.
      We’ll use composition and strategy instead of conditionals.
    • Clear Modularization of the System
      When using a DI injector (Spring), I think that the Configuration class (or XML) can pinpoint the modules’ design, and the modules’ single responsibility.
      I prefer to have several small-to-medium-sized configuration files (XML or Java) rather than one big file / class.
      It helps to see the responsibility of each module and makes it easier to maintain.
      I think that the configuration approach to injection has an advantage over the annotation approach, simply because the configuration approach puts the modules in the spotlight.

    Conclusion
    As I mentioned in the beginning of this post, I think that Single-Responsibility-Principle is the basis of a good design.
    If you have this principle in mind while designing and developing, you will have simpler, more readable code.
    Better design will follow.

    One Final Note
    As always, one needs to be careful about how to apply practices, code and design.
    Sometimes we might overdo it and make simple things overly complex.
    So common sense must be applied to any refactoring and change.

    Spring Context with Properties, Collections and Maps

    In this post I want to show how I added the XML context file to the Spring application.
    The second aspect I will show is the usage of a properties file for external constant values.

    All of the code is located at: https://github.com/eyalgo/request-validation (as previous posts).

    I decided to do all the wiring using XML file and not annotation for several reasons:

    1. I am simulating a situation where the framework is not part of the codebase (it’s an external library) and it is not annotated by anything
    2. I want to emphasize the modularity of the system by using several XML files (yes, I know it can be done using @Configuration)
    3. Although I know Spring, I still feel more comfortable having more control using the XML files
    4. For Spring newbies, I think they should start with XML configuration files, and only once they grasp the idea and the technology should they start using annotations

    I will expand on the modularization and how the sample app is constructed in a later post.

    Let’s start with the properties file. Here’s part of it:

    flag.external = EXTERNAL
    flag.internal = INTERNAL
    flag.even = EVEN
    flag.odd = ODD
    
    validation.acceptedIds=flow1,flow2,flow3,flow4,flow5
    
    filter.external.name.max = 10
    filter.external.name.min = 4
    
    filter.internal.name.max = 6
    filter.internal.name.min = 2
    

    Properties File Location
    We also need to tell Spring the location of our property file.
    You can use PropertyPlaceholderConfigurer, or you can use the context element as shown here:

    <context:property-placeholder location="classpath:spring/flow.properties" />
    

    Simple Bean Example
    This is a very basic example of how to declare a bean:

    <bean id="evenIdFilter"
      class="org.eyal.requestvalidation.flow.example.flow.itemsfilter.filters.EvenIdFilter">
    </bean>
    

    Using Simple Property
    Suppose you want to add a property attribute to your bean.
    I always use constructor injection, so I will use constructor-arg in the bean declaration.

    <bean id="longNameExternalFilter"
        class="org.eyal.requestvalidation.flow.example.flow.itemsfilter.filters.NameTooLongFilter">
        <constructor-arg value="${filter.external.name.max}" />
    </bean>
    

    List Example
    Suppose you have a class that gets a list (or set) of objects (either another bean class, or just Strings).
    You can add it as a parameter in the constructor-arg, but I prefer to create the list outside the bean declaration and refer to it in the bean.
    Here’s how:

    <util:list id="defaultFilters">
      <ref bean="emptyNameFilter" />
      <ref bean="someOtherBean" />
    </util:list>
    

    And

    <bean id="itemFiltersMapperByFlag"
      class="org.eyal.requestvalidation.flow.itemsfilter.ItemFiltersMapperByFlag">
       <constructor-arg ref="defaultFilters" />
       <constructor-arg ref="filtersByFlag" />
    </bean>
    

    Collection of Values in the Properties File
    What if I want to set a list (set) of values to pass to a bean?
    Not a list of beans as described above.
    Then, in the properties file, I will put:
    validation.acceptedIds=flow1,flow2,flow3,flow4,flow5

    And in bean:

    <bean id="acceptedIdsValidation"
      class="org.eyal.requestvalidation.flow.example.flow.requestvalidation.validations.AcceptedIdsValidation">
      <constructor-arg value="#{'${validation.acceptedIds}'.split(',')}" />
    </bean>
    

    See how I used the Spring Expression Language (SpEL).
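
    On the Java side, the matching constructor simply receives the split values as a collection (a sketch; the real class is in the linked repo):

    // requires: import java.util.List;
    public class AcceptedIdsValidation {
        private final List<String> acceptedIds;

        public AcceptedIdsValidation(List<String> acceptedIds) {
            this.acceptedIds = acceptedIds;
        }
    }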

    Map Injection Example
    Here’s a sample of an empty map creation:

    <util:map id="validationsByFlag">
    </util:map>
    

    Here’s a map with some entries.
    See how the keys are also set from the properties file.

    <util:map id="filtersByFlag">
      <entry key="${flag.external}" value-ref="filtersForExternal" />
      <entry key="${flag.internal}" value-ref="filtersForInternal" />
      <entry key="${flag.even}" value-ref="filtersForEven" />
      <entry key="${flag.odd}" value-ref="filtersForOdd" />
    </util:map>
    


    In the map example above we have keys as Strings from the properties file.
    The values are references to other beans, as described above.

    The usage would be the same as for the list:

    <bean id="itemFiltersMapperByFlag"
      class="org.eyal.requestvalidation.flow.itemsfilter.ItemFiltersMapperByFlag">
       <constructor-arg ref="defaultFilters" />
       <constructor-arg ref="filtersByFlag" />
    </bean>
    

    Conclusion
    In this post I showed some basic examples of Spring configuration using XML and a properties file.
    I strongly believe that until the team fully understands the way Spring works, everyone should stick with this kind of configuration.
    If you find that you start to get files which are too big, you may want to check your design. Annotations will just hide your poorly designed system.

    Spring and Maven Configuration

    This is the first post in a series demonstrating how to use Spring in an application.
    In the series I will show some how-tos on technical aspects (context file, properties, etc.).
    I will also show some design aspects and the test approach.

    In this post I will simply show how to integrate Spring using Maven.

    The basic dependency would be the context. Through Maven’s transitive dependencies, spring-core will be in the project as well.

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
    </dependency>
    

    If we want to use annotations such as @Inject, which comes from a Java JSR, we’ll add the following dependency:

    <dependency>
      <groupId>javax.inject</groupId>
      <artifactId>javax.inject</artifactId>
      <version>1</version>
    </dependency>
    

    And in order to be able to test using Spring, here’s what we’ll need (in here, the scope is test):

    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-test</artifactId>
      <version>${spring.version}</version>
      <scope>test</scope>
    </dependency>
    

    You can see that I didn’t add spring-core as it comes with the context / test dependencies.
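
    With that in place, a context-backed test can look like this (a minimal sketch; the test class and context file names are hypothetical):

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:spring/flow-context.xml") // hypothetical context file
    public class FlowWiringTest {
        @Test
        public void contextLoads() {
            // wiring errors in the XML will fail this test at context startup
        }
    }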

    You can find the code at: https://github.com/eyalgo/request-validation

    Some notes about the code.

    I added the Spring code, context and Spring Maven dependencies to the test environment.
    This is on purpose.
    I want to emphasize the separation of the validation-filter framework from the usage and wiring of an application.

    In real life, you might have an external library that you want to use in a Spring-injected application.
    So the test environment in the code simulates the application, and the src is the “external library”.

    Request Validation and Filtering by Flags – Redesign and Refactoring

    General
    In the previous posts I started describing a validation / filtering framework we’re building.
    While showing the code, I am trying to show clean code, test orientation and code evolution.
    There is some agility in the process; we know the end requirements, but the exact details are evolving over time.

    During the development we have changed the code to be more general as we saw some patterns in it.
    The code evolved as the flow evolved as well.

    The flow as we now understand it
    Here’s a diagram of the flow we’ll implement

    Request Sequence

    The Pattern
    At each step of the sequence (validation, filtering, action), we recognized the same pattern:

    1. We have specific implementations (filters, validations)
    2. We have an engine that wraps up the specific implementations
    3. We need to map the implementations by flag and, based on the request’s flags, select the appropriate implementations.
    4. We need to have a class that calls the mapper and then the engine

    A diagram showing the pattern

    The Pattern

    Source Code
    In order to show some of the evolution of the code, and how refactoring changed it, I added tags in GitHub after major changes.

    Code Examples
    Let’s see what came up from the mapper pattern.

    public interface MapperByFlag<T> {
      List<T> getOperations(Request request);
    }
    
    public abstract class AbstractMapperByFlag<T> implements MapperByFlag<T> {
      private List<T> defaultOperations;
      private Map<String, List<T>> mapOfOperations;
    
      public AbstractMapperByFlag(List<T> defaultOperations, Map<String, List<T>> mapOfOperations) {
        this.defaultOperations = defaultOperations;
        this.mapOfOperations = mapOfOperations;
      }
    
      @Override
      public final List<T> getOperations(Request request) {
        Set<T> selectedFilters = Sets.newHashSet(defaultOperations);
        Set<String> flags = request.getFlags();
        for (String flag : flags) {
          if (mapOfOperations.containsKey(flag)) {
            selectedFilters.addAll(mapOfOperations.get(flag));
          }
        }
        return Lists.newArrayList(selectedFilters);
      }
    }
    
public class RequestValidationByFlagMapper extends AbstractMapperByFlag<RequestValidation> {
  public RequestValidationByFlagMapper(List<RequestValidation> defaultValidations,
      Map<String, List<RequestValidation>> mapOfValidations) {
    super(defaultValidations, mapOfValidations);
  }
}

public class ItemFiltersByFlagMapper extends AbstractMapperByFlag<Filter> {
  public ItemFiltersByFlagMapper(List<Filter> defaultFilters, Map<String, List<Filter>> mapOfFilters) {
    super(defaultFilters, mapOfFilters);
  }
}
    

I created a test for the abstract class, to show the flow itself.
The tests of the implementations use Java reflection to verify that the correct injected parameters are passed to the super constructor.
I am showing the imports here as well, as a reference for the static imports and the Mockito and Hamcrest packages and classes.

    import static org.hamcrest.Matchers.containsInAnyOrder;
    import static org.junit.Assert.assertThat;
    import static org.mockito.Mockito.when;
    
    import java.util.List;
    import java.util.Map;
    
    import org.eyal.requestvalidation.model.Request;
    import org.junit.Before;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.mockito.Mock;
    import org.mockito.runners.MockitoJUnitRunner;
    
    import com.google.common.collect.ImmutableMap;
    import com.google.common.collect.Lists;
    import com.google.common.collect.Sets;
    
    @RunWith(MockitoJUnitRunner.class)
    public class AbstractMapperByFlagTest {
    	private final static String FLAG_1 = "flag 1";
    	private final static String FLAG_2 = "flag 2";
    
    	@Mock
    	private Request request;
    
    	private String defaultOperation1 = "defaultOperation1";
    	private String defaultOperation2 = "defaultOperation2";
    	private String mapOperation11 = "mapOperation11";
    	private String mapOperation12 = "mapOperation12";
    	private String mapOperation23 = "mapOperation23";
    
    	private MapperByFlag<String> mapper;
    
    	@Before
    	public void setup() {
    		List<String> defaults = Lists.newArrayList(defaultOperation1, defaultOperation2);
    		Map<String, List<String>> mapped = ImmutableMap.<String, List<String>> builder()
    		        .put(FLAG_1, Lists.newArrayList(mapOperation11, mapOperation12))
    		        .put(FLAG_2, Lists.newArrayList(mapOperation23, mapOperation11)).build();
    		mapper = new AbstractMapperByFlag<String>(defaults, mapped) {
    		};
    	}
    
    	@Test
    	public void whenRequestDoesNotHaveFlagsShouldReturnDefaultFiltersOnly() {
    		when(request.getFlags()).thenReturn(Sets.<String> newHashSet());
    
    		List<String> filters = mapper.getOperations(request);
    		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
    	}
    
    	@Test
    	public void whenRequestHasFlagsNotInMappingShouldReturnDefaultFiltersOnly() {
    		when(request.getFlags()).thenReturn(Sets.<String> newHashSet("un-mapped-flag"));
    		List<String> filters = mapper.getOperations(request);
    		assertThat(filters, containsInAnyOrder(defaultOperation1, defaultOperation2));
    	}
    	
    	@Test
    	public void whenRequestHasOneFlagShouldReturnWithDefaultAndMappedFilters() {
    		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1));
    		List<String> filters = mapper.getOperations(request);
    		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2));
    	}
    	
    	@Test
    	public void whenRequestHasTwoFlagsShouldReturnWithDefaultAndMappedFiltersWithoutDuplications() {
    		when(request.getFlags()).thenReturn(Sets.<String> newHashSet(FLAG_1, FLAG_2));
    		List<String> filters = mapper.getOperations(request);
    		assertThat(filters, containsInAnyOrder(mapOperation12, defaultOperation1, mapOperation11, defaultOperation2, mapOperation23));
    	}
    }
    
import static org.hamcrest.Matchers.sameInstance;
import static org.junit.Assert.assertThat;

import java.lang.reflect.Field;
import java.util.List;
import java.util.Map;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
    public class RequestValidationByFlagMapperTest {
    
    	@Mock
    	private List<RequestValidation> defaultValidations;
        
    	@Mock
    	private Map<String, List<RequestValidation>> mapOfValidations;
    
    	@InjectMocks
    	private RequestValidationByFlagMapper mapper;
    
    	@SuppressWarnings("unchecked")
        @Test
    	public void verifyParameters() throws NoSuchFieldException, SecurityException, IllegalArgumentException,
    	        IllegalAccessException {
    		Field defaultOperationsField = AbstractMapperByFlag.class.getDeclaredField("defaultOperations");
    		defaultOperationsField.setAccessible(true);
            List<RequestValidation> actualFilters = (List<RequestValidation>) defaultOperationsField.get(mapper);
    		assertThat(actualFilters, sameInstance(defaultValidations));
    
    		Field mapOfFiltersField = AbstractMapperByFlag.class.getDeclaredField("mapOfOperations");
    		mapOfFiltersField.setAccessible(true);
    		Map<String, List<RequestValidation>> actualMapOfFilters = (Map<String, List<RequestValidation>>) mapOfFiltersField.get(mapper);
    		assertThat(actualMapOfFilters, sameInstance(mapOfValidations));
    	}
    }
    

    To Do
There are other classes that might be candidates for refactoring of some sort:
RequestFlowValidation and RequestFilter are similar,
and so are
RequestValidationsEngineImpl and FiltersEngine.
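
Just to make the similarity concrete, such a refactoring could extract a shared abstraction along these lines (my sketch only; OperationsEngine is an invented name and the exact signature would need adjusting):

import java.util.List;

// Sketch: both engines take a list of operations, apply them to items, and produce a response R
public interface OperationsEngine<T, R> {
	R apply(List<T> operations, List<Item> items);
}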

    To Do 2
    Create a Matcher for the reflection part.

    Code
As always, all the code can be found at: https://github.com/eyalgo/request-validation

    A Tag for this post: all-components-in

    Conclusion
    The infrastructure is almost done.
    During this time we are also implementing actual classes for the flow (validations, filters, actions).
These are covered neither in the posts nor in the GitHub repository.
    The infrastructure will be wired to a service we have using Spring.
    This will be explained in future posts.

    Request Validation and Filtering by Flags – Filtering an Item

In a previous post, I introduced a system requirement of validating and filtering a request by setting flags on it.

    Reference: Introduction

    In this post I want to show the filtering system.

    Here are general UML diagrams of the filtering components and sequence.

[Diagram: Filtering UML]

    General Components

import java.util.List;
import java.util.Set;

public interface Item {
        String getName();
}

public interface Request {
        Set<String> getFlags();
        List<Item> getItems();
}
    

    Filter Mechanism (as described in the UML above)

import com.google.common.base.Predicate;

public interface Filter extends Predicate<Item> {
	String errorMessage();
}
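
The post ships no concrete Filter (the tests below don’t need one); purely for illustration, a hypothetical implementation might look like this (NonEmptyNameFilter is invented):

public class NonEmptyNameFilter implements Filter {
	// Guava's Predicate contract: true means the item passes this filter
	@Override
	public boolean apply(Item item) {
		return item.getName() != null && !item.getName().isEmpty();
	}

	@Override
	public String errorMessage() {
		return "item has an empty name";
	}
}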
    

FiltersEngine is a cool part: it takes several Filters and applies each of them to the items. Below you can see its code; above, the sequence diagram shows how it’s done.

import java.util.List;

import com.google.common.collect.Lists;

public class FiltersEngine {

	public FiltersEngine() {
	}

	public ItemsFilterResponse applyFilters(List<Filter> filters, List<Item> items) {
		List<Item> validItems = Lists.newLinkedList(items);
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Filter filter : filters) {
			ItemsFilterResponse responseFromFilter = responseFromFilter(validItems, filter);
			validItems = responseFromFilter.getValidItems();
			invalidItemInformations.addAll(responseFromFilter.getInvalidItemsInformations());
		}

		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}

	private ItemsFilterResponse responseFromFilter(List<Item> items, Filter filter) {
		List<Item> validItems = Lists.newLinkedList();
		List<InvalidItemInformation> invalidItemInformations = Lists.newLinkedList();
		for (Item item : items) {
			if (filter.apply(item)) {
				validItems.add(item);
			} else {
				invalidItemInformations.add(new InvalidItemInformation(item, filter.errorMessage()));
			}
		}
		return new ItemsFilterResponse(validItems, invalidItemInformations);
	}
}
    

    And of course, we need to test it:

    @RunWith(MockitoJUnitRunner.class)
    public class FiltersEngineTest {
    	private final static String MESSAGE_FOR_FILTER_1 = "FILTER - 1 - ERROR";
	private final static String MESSAGE_FOR_FILTER_2 = "FILTER - 2 - ERROR";
    	@Mock(name = "filter 1")
    	private Filter singleFilter1;
    	@Mock(name = "filter 2")
    	private Filter singleFilter2;
    	@Mock(name = "item 1")
    	private Item item1;
    	@Mock(name = "item 2")
    	private Item item2;
    
    	@InjectMocks
    	private FiltersEngine filtersEngine;
    
    	@Before
    	public void setup() {
    		when(singleFilter1.errorMessage()).thenReturn(MESSAGE_FOR_FILTER_1);
		when(singleFilter2.errorMessage()).thenReturn(MESSAGE_FOR_FILTER_2);
    
    		when(item1.getName()).thenReturn("name1");
    
    		when(item2.getName()).thenReturn("name2");
    	}
    
    	@Test
    	public void verifyThatAllSingleFiltersAreCalledForValidItems() {
    		when(singleFilter1.apply(item1)).thenReturn(true);
    		when(singleFilter1.apply(item2)).thenReturn(true);
    		when(singleFilter2.apply(item1)).thenReturn(true);
    		when(singleFilter2.apply(item2)).thenReturn(true);
    
    		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
    				Lists.newArrayList(item1, item2));
    		assertThat("expected no invalid", response.getInvalidItemsInformations(),
    				emptyCollectionOf(InvalidItemInformation.class));
    		assertThat(response.getValidItems(), containsInAnyOrder(item1, item2));
    
    		verify(singleFilter1).apply(item1);
    		verify(singleFilter1).apply(item2);
    		verify(singleFilter2).apply(item1);
    		verify(singleFilter2).apply(item2);
    		verifyNoMoreInteractions(singleFilter1, singleFilter2);
    	}
    
    	@SuppressWarnings("unchecked")
    	@Test
	public void itemsFailInDifferentFiltersShouldGetOnlyFailures() {
    		when(singleFilter1.apply(item1)).thenReturn(false);
    		when(singleFilter1.apply(item2)).thenReturn(true);
    		when(singleFilter2.apply(item2)).thenReturn(false);
    
    		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
    				Lists.newArrayList(item1, item2));
    		assertThat(
    				response.getInvalidItemsInformations(),
    				containsInAnyOrder(matchInvalidInformation(new InvalidItemInformation(item1, MESSAGE_FOR_FILTER_1)),
						matchInvalidInformation(new InvalidItemInformation(item2, MESSAGE_FOR_FILTER_2))));
    		assertThat(response.getValidItems(), emptyCollectionOf(Item.class));
    
    		verify(singleFilter1).apply(item1);
    		verify(singleFilter1).apply(item2);
    		verify(singleFilter1).errorMessage();
    		verify(singleFilter2).apply(item2);
    		verify(singleFilter2).errorMessage();
    		verifyNoMoreInteractions(singleFilter1, singleFilter2);
    	}
    
    	@Test
    	public void firstItemFailSecondItemSuccessShouldGetOneItemInEachList() {
    		when(singleFilter1.apply(item1)).thenReturn(true);
    		when(singleFilter1.apply(item2)).thenReturn(true);
    		when(singleFilter2.apply(item1)).thenReturn(false);
    		when(singleFilter2.apply(item2)).thenReturn(true);
    
    		ItemsFilterResponse response = filtersEngine.applyFilters(Lists.newArrayList(singleFilter1, singleFilter2),
    				Lists.newArrayList(item1, item2));
    		assertThat(response.getInvalidItemsInformations(), contains(matchInvalidInformation(new InvalidItemInformation(item1,
				MESSAGE_FOR_FILTER_2))));
    		assertThat(response.getValidItems(), containsInAnyOrder(item2));
    
    		verify(singleFilter1).apply(item1);
    		verify(singleFilter1).apply(item2);
    		verify(singleFilter2).apply(item1);
    		verify(singleFilter2).apply(item2);
    		verify(singleFilter2).errorMessage();
    		verifyNoMoreInteractions(singleFilter1, singleFilter2);
    	}
    
	private static BaseMatcher<InvalidItemInformation> matchInvalidInformation(InvalidItemInformation expected) {
    		return new InvalidItemInformationMatcher(expected);
    	}
    
	private final static class InvalidItemInformationMatcher extends BaseMatcher<InvalidItemInformation> {
    		private InvalidItemInformation expected;
    
    		private InvalidItemInformationMatcher(InvalidItemInformation expected) {
    			this.expected = expected;
    		}
    
    		public boolean matches(Object itemInformation) {
    			InvalidItemInformation actual = (InvalidItemInformation) itemInformation;
    			return actual.getName().equals(expected.getName())
    					&& actual.getErrorMessage().equals(expected.getErrorMessage());
    		}
    
		public void describeTo(Description description) {
			description.appendText("an InvalidItemInformation for ").appendValue(expected.getName());
		}
    	}
    }
    

Some explanation about the test:
You can see that I don’t care about the implementation of Filter. Actually, I don’t even have any implementation of it.
I also don’t have implementations of Item or Request.
You can also see an example of how to create a BaseMatcher to be used with assertThat(…).

    Coding
    Try to see whether it is ‘clean’. Can you understand the story of the code? Can you tell what the code does by reading it line by line?

In the following post I will show how I applied the flag mapping to select the correct filters for a request.

    You can find all the code in: https://github.com/eyalgo/request-validation

    [Edit] Created tag Filtering_an_item before refactoring.

    Request Validation and Filtering by Flags – Introduction

    General

    We are working on a service that should accept some kind of request.

The request has a list of items. In the response we need to tell the client whether the request is valid, and also give some information about each item: is it valid or not? A valid item will be persisted; an invalid one is filtered out. So the response can hold the number of valid items (which are sent to be persisted) and a list of information about the filtered-out items.

The request carries more metadata: a collection (set) of flags. The filtering and validation are based on the request’s flags, so one request may be validated and filtered differently from another, depending on the flags each carries.

    We might have general validations / filters that need to be applied to any request, whatever flags it has.

[Diagram: Request Validation and Filtering – High-Level Design]

    Design

    Flags Mapping

    We’ll hold a mapping of flag-to-filters, and flag-to-validation.

    Request

    Has flags and items.

    Components

    Filter, Filter-Engine, Flags-Mapper

    Development Approach

    Bottom Up

We already have a basic request, as the service is up and running, but we don’t have any infrastructure for flags, flag mapping, validation or filtering.

We’ll work bottom-up: create the mechanism for filtering, enhance the request, and then wire it all up using Spring.

    Coding

I’ll try to show the code through its tests, and the development using some kind of TDD approach.

I am using Eclipse’s EclEmma for coverage.

    General

    By looking at the code, you can see usage of JUnit, Mockito, Hamcrest, Google-Guava.

You can also see small classes and an interface-driven development approach.

    Source Code

    https://github.com/eyalgo/request-validation

    Bitbucket vs. GitHub my Conclusion

    When I first started blogging (not too long ago) I had to choose where to put the code I use as examples.
    I already had GitHub and Bitbucket accounts, so I just needed to decide.

There are a lot of articles, blog posts and questions comparing the two options.
Below you can find some of them (I did some Googling…).

Initially I chose Bitbucket, though without any particular reason.
Perhaps one reason was working with an Atlassian product, a company I like.
Another big advantage of Bitbucket is the option of private repositories.

However, GitHub is more popular; DZone lets you give your GitHub username, and I guess a user (profile) is more “searchable” there.
GitHub also has the gist feature, which is very helpful when writing code examples in a blog.

So for now, I have decided to use both solutions.
Whenever I work on a side project that I don’t want to publicize, I will put it in Bitbucket as a private repository.
Public repositories, however, I will put on GitHub.
    In the following days I will change the links of previous posts to direct to the GitHub location instead of Bitbucket.

    Moving a Repository

If I am using two repository hosts, I need to know how to move repositories from one location to another:
    https://coderwall.com/p/ufxjgg
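
From memory, the approach described in the linked article boils down to git’s mirror options (user/repo are placeholders; double-check against the article):

git clone --mirror git@bitbucket.org:user/repo.git
cd repo.git
git push --mirror git@github.com:user/repo.git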

    Bitbucket vs. GitHub Links

    Recommended Books

    I have a list of books, which I highly recommend.
    Each book taught me something different.

It all began years ago, when I entered the interviewing process for my second workplace.
I was a junior Java developer, a coder. I didn’t have much experience and, more importantly, I did not have a mentor or someone who would direct me. I learned on my own, after a CS Java course. Java 1.4 had just come out.

    One of my first interviewers was a great mentor. We met for an hour (probably). I don’t remember the company.  I don’t remember the job position. I don’t remember his name.
    But I DO remember a few things he asked me.
He asked me if I knew what TDD was. He asked me about XP.
He also recommended a book: Effective Java by Joshua Bloch.

    He didn’t even know what a great gift he gave me.

So I went on and bought Effective Java, 1st edition, and TDD by Kent Beck.
That was my first step towards becoming a craftsman.

    Effective Java and Refactoring
These two books may not look related at first.
However, both of them taught me a lot about design and patterns.
I started to understand how to write code using patterns (Refactoring), and how to do it in Java (Effective Java).
These books gave me the grounding for best practices in Java, design patterns and OOD.

    Test Driven Development
    I can’t say enough about this book.
    At first, I really didn’t understand what it was all about.
But it was part of XP!! (which I didn’t understand either).
TDD was left on the shelf until I was ready for it.

    Clean Code and The Pragmatic Programmer
    Should I say more?
If you haven’t read both, stop everything and go read them.
They are a MUST for anyone who wants to be a craftsman and takes his/her profession seriously.
These books are also lots of fun to read, especially the Pragmatic one.

    The Clean Coder
If you want to take the next step towards being a professional, read it.
I was sometimes frustrated while reading it; I kept thinking about how I could pass all of this material on to my teammates…

    Dependency Injection
Somewhat unrelated, but as I see it, if you don’t use DI, you can’t write clean, testable code.
If you can’t write clean, testable code, you are missing the point of craftsmanship.
The book covers some injector frameworks, but also describes what it is all about.

    Below is a table with the books I have mentioned.

One last remark:
this list does not contain the only books I have read.
Over the years I have read more technical / professional books, but these made the most difference for me.

Name | Author(s) | ISBN
Effective Java | Joshua Bloch | 978-032-135-668-0
Test-Driven Development | Kent Beck | 978-032-114-653-3
Refactoring | Martin Fowler | 978-020-148-567-7
Dependency Injection | Dhanji R. Prasanna | 978-193-398-855-9
Clean Code | Robert C. Martin | 978-013-235-088-4
The Clean Coder | Robert C. Martin | 978-013-708-107-3
The Pragmatic Programmer | Andrew Hunt, David Thomas | 978-020-161-622-4

    Learn Ruby

I decided to learn a new language, one that is different from Java, and thought Ruby would be a good candidate.
After searching the web for good tutorials, not just basic ‘Hello World’ ones, I found this one:
    http://rubykoans.com/

I liked that the learning is step by step, and it feels like there’s a lot of thinking behind it.
What I liked most was the TDD approach of the learning method.

    At the beginning the tests are simple and teach the basic syntax and semantics.
    Gradually the tests become more interesting and complex.
    Each test file starts with ‘about_’.

The first interesting and more challenging test was the triangle; not because of the “algorithm”, but because of the usage of another file. I tried to have a separate class for validation, just to build the habit.
Then came ‘calculate score’. The basic solution was simple, but it took me several iterations to make the code cleaner and to get familiar with Ruby’s Hash usage.
I wonder whether I actually made it better and cleaner.

The inheritance exercise was simple, although I needed to grasp the meaning of “inheritance and cross-methods”.
Modules: I didn’t really understand the usage of modules in a class. What are they good for?
“About scope” helped me understand it.
    I really like the idea that the tests are built one on top of the other.

‘About Proxy’ is really interesting! It took me the longest time to solve.
It builds on many previous assignments and was a real challenge; I had to recheck classes, symbols, arrays and many other tests.
There’s an extra-credit assignment, which I’ll do next.
    In the meantime, here’s the code in GitHub:

    https://github.com/eyalgo/ruby-koans-exercise

    Resources
    http://rubykoans.com/
    http://www.ruby-doc.org/
    https://github.com/bbatsov/ruby-style-guide

    Coding Exercise Introduction

As part of my job, I do a lot of architectural design, OOD, clean code, TDD and everything that strives to be craftsmanship work.

However, I don’t get to face many problems such as tree traversal, BFS, DFS, lists, etc.
We can call these kinds of problems CS1 and CS2 course problems.
I also don’t have the opportunity to learn new languages. We’re writing in Java, and there is no reason at the office to start learning a new language; at least not for business purposes.

    But, as a professional developer, I want to constantly exercise, sharpen and improve my skills.
    So I took upon myself a small project:

    1. Do some basic coding that I usually don’t do
    2. Learn a new language

    As for task #1, I already wrote some Java code to problems I thought of, and will try to add more during the weeks to come.

As for task #2, I decided to start learning Ruby. Why Ruby? No particular reason; it’s different from Java and in demand in the market.
    Once I get comfortable with Ruby, my plan is to write the problems in Ruby.

    [EDIT]
The code is on GitHub.
See why at: Why GitHub
    [EDIT]

Some of the code has nicely written tests; some, sadly, I just played around with, and it is not REALLY, AUTOMATICALLY tested. This is something that must be fixed as well.

These are the problems I have already written:

    • Factorial
    • Fibonacci
    • Reverse a list
    • Anagram
    • Palindrome
    • BFS tree traverse
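
As a taste of the kind of code involved, here is a minimal BFS tree-traverse sketch of my own (an illustration, not the repository’s code):

import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

public class BfsTraverse {
	static class Node {
		final String name;
		final List<Node> children;

		Node(String name, List<Node> children) {
			this.name = name;
			this.children = children;
		}
	}

	// Visits the tree level by level using a FIFO queue
	public static List<String> bfs(Node root) {
		List<String> visited = new LinkedList<>();
		Queue<Node> queue = new LinkedList<>();
		queue.add(root);
		while (!queue.isEmpty()) {
			Node current = queue.remove();
			visited.add(current.name);
			queue.addAll(current.children);
		}
		return visited;
	}
}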

    Code at GitHub: https://github.com/eyalgo/brainers-java

    Getting Started with Google Guava – Book Review

    I recently got my hands (my kindle) on the book: Getting Started with Google Guava by Bill Bejeck.

    I love reading technical books and always hope to learn new stuff. As an extensive user of the Guava library, I was really intrigued to see what I was missing from this library and how I could improve the usage of it.

    I will not go over it chapter by chapter with explanations, as anyone can check the TOC and see the details of what this book covers. Instead, I will try to give my own impression.

    The book covers all aspects of the Guava library. For each aspect, the author shows the most used implementation and mentions other ones.

    In nearly every chapter, I was introduced to some gems that immediately went into our own codebase when I started refactoring. That was FUN. And I saw code improvements instantly.

    I really enjoyed reading the code examples with the extensive usage of JUnit as showcases for the behavior of the various classes. It’s a great way of showing what the library does. And as a side effect, it shows developers how a test is used as the specs of the code.

It seems that the author was very meticulous about writing clean and testable code; two areas which, I think, are the most important for being a professional developer (a craftsman).

I think this book is great for both newbies and experienced Guava users.
I also think it is great for developers who want to learn how to write cleaner and better code.