SRP as part of SOLID

The Clean Code Alliance organized a meetup about the SOLID principles.
I had the opportunity to talk about the Single Responsibility Principle as part of SOLID.
It’s a presentation I have given several times in the past.

It was fun talking about it.
There were many interesting and challenging questions, which gave me lots of things to think about.

Title:
SRP as part of SOLID

Abstract:
Single Responsibility Principle (SRP) is part of the SOLID acronym. The SOLID principles help us design better code. Applying those principles helps us have maintainable code, fewer bugs and easier testing. The SRP is the foundation of better-designed code. In this session I will introduce the SOLID principles and explain in more detail what SRP is all about. Applying those principles is not sci-fi; it is real, and I will demonstrate it.

Bio:
Eyal Golan is a Senior Java developer and agile practitioner. He is responsible for building high-throughput, low-latency server infrastructure, manages the continuous integration and deployment of the system, and leads the coding practices. He practices TDD and clean code, and is on the path of software craftsmanship.

Following me, Hayim Makabee gave a really interesting talk about The SOLID Principles Illustrated by Design Patterns.

Here are the slides.

And the video (in Hebrew)

Thanks to the organizers, Boris and Itzik, and mostly to the audience, who seemed very interested.

Linkedin Twitter facebook github


Working with Legacy Test Code

Legacy Code and Smell by Tests

Working with unit tests can help in many ways to improve the code-base.
One of the aspects, which I mostly like, is that tests can point us to code smell in the production code.
For example, if a test needs a large setup or asserts many outputs, it can indicate that the unit under test doesn’t follow good design, such as SRP and other OOD principles.

But sometimes the tests themselves are poorly structured or designed.
In this post I will give two examples of such cases and show how I solved them.

Test Types

There are several types, or layers, of tests.

  • Unit Tests
    Unit test should be simple to describe and to understand.
    Those tests should run fast. They should test one thing. One unit (method?) of work.
  • Integration Tests
    Integration tests are more vague in definition.
    What kind of modules do they check?
    Integration of several modules together? Dependency-Injector wiring?
    Test using real DB?
  • Behavioral Tests
    Those tests will verify the features.
    They may be the interface between the PM / PO to the dev team.
  • End2End / Acceptance / Staging / Functional
    High level tests. May run on production or production-like environment.

Complexity of Tests

Basically, the “higher level” the test, the more complex it is.
Also, the ratio between the possible number of tests and the production code increases dramatically per test level.
Unit tests will grow linearly as the code grows.
But starting with integration tests and higher-level ones, the options start to grow at an exponential rate.
A simple calculation:
If two classes interact with each other, and each has 2 methods, how many options should we check if we want to cover everything? And imagine that those methods have some control flow, like an if.

Sporadically Failing Tests

There are many reasons for a test to be “problematic”.
One of the worst is a test that sometimes fails and usually passes.
The team ignores the CI’s emails. It creates noise in the system.
You can never be sure whether there’s a bug, something was broken, or it’s a false alarm.
Eventually we’ll disable the CI because “it doesn’t work and it’s not worth the time”.

Integration Test and False Alarm

Any type of test is subject to false alarms if we don’t follow basic rules.
The higher the test level, the greater the chance of false alarms.
In integration tests, there’s a higher chance of false alarms due to external resource issues:
no internet connection, no DB connection, random misses and many more.

Our Test Environment

Our system is “quasi legacy”.
It’s not exactly legacy because it has tests. Those tests even have good coverage.
It is legacy because of the way it is (un)structured and the way the tests are built.
It used to be covered only by integration tests.
In the past few months we started implementing unit tests. Especially on new code and new features.

All of our integration tests inherit from BaseTest, which inherits Spring’s AbstractJUnit4SpringContextTests.
The test context wires everything: about 95% of the production code.
That takes time, but even worse, it connects to real external resources, such as MongoDB and services that connect to the internet.

In order to improve test speed, a few weeks ago I changed MongoDB to an embedded one. It improved the running time of the tests by an order of magnitude.

This type of setup makes testing much harder.
It’s very difficult to mock services, and the environment is not isolated from the internet, the DB and much more.

After this long introduction, I want to describe two problematic tests and the way I fixed them.
Their common attribute was that they sometimes failed and usually passed.
However, each failed for a different reason.

Case Study 1 – Creating Internet Connection in the Constructor

The first example shows a test, which sometimes failed because of connection issues.
The tricky part was, that a service was created in the constructor.
That service got HttpClient, which was also created in the constructor.

Another issue was that I couldn’t modify the test to use mocks instead of Spring wiring.
Here’s the original constructor (modified for the example):

private HttpClient httpClient;
private MyServiceOne myServiceOne;
private MyServiceTwo myServiceTwo;

public ClassUnderTest(PoolingClientConnectionManager httpConnectionManager, int connectionTimeout, int soTimeout) {
	HttpParams httpParams = new BasicHttpParams();
	HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout);
	HttpConnectionParams.setSoTimeout(httpParams, soTimeout);
	HttpConnectionParams.setTcpNoDelay(httpParams, true);
	httpClient = new DefaultHttpClient(httpConnectionManager, httpParams);

	myServiceOne = new MyServiceOne(httpClient);
	myServiceTwo = new MyServiceTwo();
}

The tested method used myServiceOne, and the test sometimes failed because of connection problems in that service.
Another problem was that the result (from the web) wasn’t always deterministic, so the test failed.

The way the code is written does not enable us to mock the services.

In the test code, the class under test was injected using the @Autowired annotation.

The Solution – Extract and Override Call

The idea was taken from Working Effectively with Legacy Code.

  1. Identify what I need to fix.
    In order to make the test deterministic, without a real connection to the internet, I need access to the services’ creation.
  2. Introduce protected methods that create those services.
    Instead of creating the services in the constructor, the constructor will call those methods.
  3. In the test environment, create a class that extends the class under test.
    This class will override those methods and return fake (mocked) services.

Solution’s Code

public ClassUnderTest(PoolingClientConnectionManager httpConnectionManager, int connectionTimeout, int soTimeout) {
	HttpParams httpParams = new BasicHttpParams();
	HttpConnectionParams.setConnectionTimeout(httpParams, connectionTimeout);
	HttpConnectionParams.setSoTimeout(httpParams, soTimeout);
	HttpConnectionParams.setTcpNoDelay(httpParams, true);
	
	this.httpClient = createHttpClient(httpConnectionManager, httpParams);
	this.myServiceOne = createMyServiceOne(httpClient);
	this.myServiceTwo = createMyServiceTwo();
}

protected HttpClient createHttpClient(PoolingClientConnectionManager httpConnectionManager, HttpParams httpParams) {
	return new DefaultHttpClient(httpConnectionManager, httpParams);
}

protected MyServiceOne createMyServiceOne(HttpClient httpClient) {
	return new MyServiceOne(httpClient);
}

protected MyServiceTwo createMyServiceTwo() {
	return new MyServiceTwo();
}

// In the test class:
private MyServiceOne mockMyServiceOne = mock(MyServiceOne.class);
private MyServiceTwo mockMyServiceTwo = mock(MyServiceTwo.class);
private HttpClient mockHttpClient = mock(HttpClient.class);

private class ClassUnderTestForTesting extends ClassUnderTest {

	private ClassUnderTestForTesting(int connectionTimeout, int soTimeout) {
		super(null, connectionTimeout, soTimeout);
	}
	
	@Override
	protected HttpClient createHttpClient(PoolingClientConnectionManager httpConnectionManager, HttpParams httpParams) {
		return mockHttpClient;
	}

	@Override
	protected MyServiceOne createMyServiceOne(HttpClient httpClient) {
		return mockMyServiceOne;
	}

	@Override
	protected MyServiceTwo createMyServiceTwo() {
		return mockMyServiceTwo;
	}
}

Now, instead of wiring the class under test, I create it in the @Before method.
It accepts other services (not described here), which I get using @Autowired.

Another note: before creating the special class-for-test, I ran all the integration tests of this class in order to verify that the refactoring didn’t break anything.
I also restarted the server locally and verified that everything still works.
It’s important to do these verifications when working with legacy code.
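To make the mechanics concrete, here is a tiny, self-contained sketch of Extract and Override. All names are hypothetical, and a hand-rolled fake stands in for the mockito mocks:

```java
// Hypothetical, self-contained illustration of Extract and Override
public class ExtractAndOverrideDemo {

    interface Service {
        String fetch();
    }

    static class RealService implements Service {
        @Override
        public String fetch() {
            return "real result (would hit the network)";
        }
    }

    static class ClassUnderTest {
        private final Service service;

        ClassUnderTest() {
            // Creation is extracted to a protected method (the seam)
            this.service = createService();
        }

        protected Service createService() {
            return new RealService();
        }

        String doWork() {
            return "result: " + service.fetch();
        }
    }

    // Test-only subclass overrides the factory method and returns a fake
    static class ClassUnderTestForTesting extends ClassUnderTest {
        @Override
        protected Service createService() {
            return () -> "fake";
        }
    }

    public static String run() {
        return new ClassUnderTestForTesting().doWork();
    }
}
```

The production class never knows it is being tested; the subclass replaces only object creation, not behavior.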

Case Study 2 – Statistical Tests for Random Input

The second example describes a test that failed due to random results and a statistical assertion.

The code did a randomized selection between objects with similar attributes (I’m simplifying the scenario here).
The Random object was created in the class’s constructor.

Simplified Example:

private Random random;

public ClassUnderTest() {
	random = new Random();
	// more stuff
}

//The method is package protected so we can test it
MyPojo select(List<MyPojo> pojos) {
	// do something
	int randomSelection = random.nextInt(pojos.size());
	// do something
	return pojos.get(randomSelection);
}

The original test did a statistical analysis.
I’ll just explain it, as it is too complicated and verbose to write out.
It had a loop of 10K iterations. Each iteration called the method under test.
It had a Map that counted the number of occurrences (returned results) per MyPojo.
Then it checked whether each MyPojo was selected around (10K / number-of-MyPojos) times, allowing some deviation.
Example:
Say we have 4 MyPojo instances in the list.
Then the assertion verified that each instance was selected between 2400 and 2600 times (10K / 4 = 2500, with a deviation of 100).

You can expect, of course, that the test sometimes failed. Increasing the allowed deviation would only reduce the number of false failures.

The Solution – Overload a Method

  1. Overload the method under test.
    In the overloaded method, add a parameter of the same type as the class’s field.
  2. Move the code from the original method to the new one.
    Make sure you use the method’s parameter and not the class’s field. Different names can help here.
  3. Test the newly created method with a mock.

Solution Code

private Random random;

// Nothing changed in the constructor
public ClassUnderTest() {
	random = new Random();
	// more stuff
}

// Overloaded method; the original signature now delegates
MyPojo select(List<MyPojo> pojos) {
	return select(pojos, this.random);
}

//The method is package protected so we can test it
MyPojo select(List<MyPojo> pojos, Random inRandom) {
	// do something
	int randomSelection = inRandom.nextInt(pojos.size());
	// do something
	return pojos.get(randomSelection);
}
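With the overloaded method in place, the test can pass a controlled Random and become fully deterministic. A self-contained sketch, with MyPojo replaced by String for brevity:

```java
import java.util.List;
import java.util.Random;

public class SelectDemo {

    // Same shape as the overloaded method above, simplified to String
    static String select(List<String> pojos, Random inRandom) {
        int randomSelection = inRandom.nextInt(pojos.size());
        return pojos.get(randomSelection);
    }

    public static String deterministicSelect() {
        // A stub Random that always "rolls" 2 removes the statistical noise
        Random fixed = new Random() {
            @Override
            public int nextInt(int bound) {
                return 2;
            }
        };
        return select(List.of("a", "b", "c", "d"), fixed);
    }
}
```

No 10K-iteration loop and no deviation window: the test asserts the exact element for a known “roll”.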

Conclusion

Working with legacy code can be challenging and fun.
Working with legacy test code can be fun as well.
It feels really good to stop receiving annoying emails about failing tests.
It also increases the team’s trust in the CI process.


Working With Legacy Code, What does it Really Mean

At the end of January I am going to talk in Agile Practitioners 2015 TLV.
I’ll be talking about Legacy Code and how to approach it.

As the convention’s name implies, we’re talking practical stuff.

So what is practical in working with legacy code?
Is it how to extract a method? Or maybe how to introduce a setter for a static singleton?
Break a dependency?
There are so many actions to take while working on legacy code.

But I want to stop for a minute and think.
What does it mean to work on legacy code?
How do we want the code to be after the changes?
Why do we need to change it? Do we really need to change it?

Definition
Let’s start with the definition of Legacy Code.
If you search the web you will see definitions such as “…Legacy code refers to an application system source code type that is no longer supported…” (from: techopedia)

People may think that legacy code is just old, patched code.

The definitions above are correct (old, patched, un-maintained, etc.), but I think that the definition coined by Michael Feathers (Working Effectively with Legacy Code) is better.
He defined legacy code as

Code Without Tests

I like to add that legacy code is usually code that cannot be tested.
So basically, if I wrote code 10 minutes ago that is not tested, and not testable, then it’s already legacy code.

Questioning the Code
When approaching code (any code), I think we should ask ourselves the following questions constantly.

  • What’s wrong with this code?
  • How do we want the code to be?
  • How can I test this piece of code?
  • What should I test?
  • Am I afraid to change this part of code?

Why Testable Code?
Why do we want to test our code?

Tests are the harness of the code.
It’s the safety net.

Imagine a circus show with a trapeze. There’s a safety net (or a mattress) below.
The acrobats can perform, knowing that nothing harmful will happen if they fall (well, maybe to their pride).

Recently I went to an indie circus show.
The band was playing and a girl came to do some tricks on a high rope.
But before she even started, she fixed a mattress below.

And this is what working with legacy code is all about:
Put a mattress before you start doing tricks…
Or, in our words, add tests before you work / change the legacy code.

Think about it: the list of questions above can be answered (or thought of) just by understanding that we need to write tests for our code.
Once you put up your safety net, you’re not afraid to jump.
⇒ Once you write tests, you can add features, fix bugs, and refactor.

Conclusion
In this post I summarized what it means to work with legacy code.
It’s simple:
Working with legacy code is knowing how to write tests for untested code.

The crucial thing is understanding that we need to do that. Understanding that we need to invest the time to write those tests.
I think that this is as important as knowing the techniques themselves.

In the following post(s) I will give some examples of techniques.

A girl is doing trapeze with a mattress below


It’s All About Tests – Part 3

In the previous two posts I discussed mostly the philosophy and attitude of developing with testing.
In this post I give some tips and examples of tools for testing.

Tools

JUnit
http://junit.org/
There’s also TestNG, which is a great tool. But I have much more experience with JUnit, so I will describe that framework.
1. Use the latest version.
2. Know your testing tool!

  • @RunWith
    This is a class annotation. It tells JUnit to run with a different Runner (the mockito and Spring runners are the most common ones I use).

    import org.mockito.runners.MockitoJUnitRunner;
    ...
    @RunWith(MockitoJUnitRunner.class)
    public class MyClassTest {
      ...
    }
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "/META-INF/app-context.xml","classpath:anotherContext.xml" })
    public class MyClassTest {
      ...
    }
    // You can inherit AbstractJUnit4SpringContextTests instead of using runner
    
  • @Rule
    A kind of AOP for tests.
    The most common out-of-the-box rule is the TemporaryFolder rule. It lets you use the file system without worrying about opening and closing files.
    An example of Rules can be found here.
  • Parameterized runner
    A really cool tool. It lets you run the same test with different inputs and different expected outputs.
    It can be abused, though, and make a test unreadable.
  • Test Data Preparation and Maintenance Tips
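As a sketch of the Parameterized runner (assuming JUnit 4 on the classpath; the test data is illustrative):

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// The same test method runs once per row of data()
@RunWith(Parameterized.class)
public class AdditionTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 1, 1, 2 },
            { 2, 3, 5 },
            { -1, 1, 0 }
        });
    }

    private final int a;
    private final int b;
    private final int expected;

    public AdditionTest(int a, int b, int expected) {
        this.a = a;
        this.b = b;
        this.expected = expected;
    }

    @Test
    public void whenAdding_ThenSumIsExpected() {
        assertEquals(expected, a + b);
    }
}
```

Each row in data() is passed to the constructor, so one test method covers all the cases.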

hamcrest
http://hamcrest.org/JavaHamcrest/
This library is an “extension” of JUnit.
I can’t work without it 🙂
The Hamcrest library gives us out-of-the-box matchers.
Matchers are used with the assertThat(...,Matcher) flavor.
I almost always use this flavor.
(In the previous post, someone suggested that I shouldn’t use assertTrue(…), but instead use assertThat.)

There are plenty of types of matchers:
You can verify that objects exist in a collection, ignoring order.
You can check greater-than.
The test is more readable using assertThat + matcher.

assertThat(mapAsCache.containsKey(new CacheKey("valA", "valB")), is(true));
assertThat(cachePairs.size(), is(2));
assertThat(entity.getSomething(), nullValue(Double.class));
assertThat(event.getType(), equalTo(Type.SHOWN));
assertThat(bits, containsInAnyOrder(longsFromUsIndexOne, longsFromUsIndexZero));

You can create your own Matcher. It’s very easy.
Here’s an example of matchers that verify regular expressions: https://github.com/eyalgo/junit-additions
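A sketch of such a custom matcher, using hamcrest’s TypeSafeMatcher (the names here are mine, not necessarily those in the linked repo):

```java
import java.util.regex.Pattern;

import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

// A custom matcher that verifies a string matches a regular expression
public class RegexMatcher extends TypeSafeMatcher<String> {

    private final Pattern pattern;

    private RegexMatcher(String regex) {
        this.pattern = Pattern.compile(regex);
    }

    // Factory method, intended to be statically imported in tests
    public static Matcher<String> matchesRegex(String regex) {
        return new RegexMatcher(regex);
    }

    @Override
    protected boolean matchesSafely(String item) {
        return pattern.matcher(item).matches();
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("a string matching ").appendText(pattern.pattern());
    }
}
```

Usage: assertThat("abc123", matchesRegex("[a-z]+\\d+")); and when it fails, describeTo produces a readable message.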

mockito
https://code.google.com/p/mockito/
This is the second library I can’t work without.
It lets you mock dependencies of the class under test.

Using mockito you mock a dependency.
Then you “tell” the mock object how to behave for certain inputs.
You tell it what to return when some input is entered.
You can verify the input arguments to a called method.
You can verify that a certain method was called (once, never, 3 times, etc.).
You can check the order of method/mock calls.

Check this out:
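The embedded snippet didn’t survive here, so the following is a small stand-in sketch of the behaviors listed above (hypothetical example, assuming mockito on the classpath):

```java
import static org.mockito.Mockito.*;

import java.util.List;

public class MockitoDemo {

    @SuppressWarnings("unchecked")
    public static String demo() {
        // Mock a dependency (a List stands in for a real collaborator)
        List<String> mockedList = mock(List.class);

        // Tell the mock what to return for a certain input
        when(mockedList.get(0)).thenReturn("first");

        String value = mockedList.get(0);

        // Verify the method was called with the expected argument
        verify(mockedList).get(0);
        // Verify a method was never called
        verify(mockedList, never()).clear();

        return value;
    }
}
```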

Other Mocking Tools

  • PowerMock and EasyMock
    These two are very useful when working with legacy code.
    They allow you to test private methods, static methods and other things that you normally can’t.
    I think that if you need them, then something is wrong with the design.
    However, sometimes you use external libraries with singletons and/or static methods.
    Sometimes you work on legacy code, which is not well suited for testing.
    In these types of scenarios, those mocking libraries can help.
    https://code.google.com/p/powermock/
    http://easymock.org/
  • JMockit http://jmockit.github.io/
  • jMock http://jmock.org/

JBehave
http://jbehave.org/
JUnit, mockito and hamcrest are used for unit tests.
JBehave is not exactly the same.
It is a tool for Behavior-Driven Development (BDD).
You write stories, which are backed by code (Java), and then you run them.

JBehave can be used for higher level tests, like functional tests.
Using JBehave, it’s easier to test a flow in the system.
It follows the Given, When, Then sequence.

If you take it a step further, it can be a great tool for communication.
The product owner can write the scenarios, and if they are all green by the end of the iteration, then we have met the definition of done.

Cucumber is another BDD tool.

Dependency Injection
In order to have testable code, among other things, you need to practice DI (dependency injection).
The reason is simple:
If you instantiate a dependency in a constructor (or method) of a class under test, then how can you mock it?
If you can’t mock the dependency, then you are bound to it, and you can’t simulate different cases.

Many applications have Spring as the DI container, but fewer developers take advantage of using the injection for testing.
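A minimal, self-contained sketch of the point (all names are hypothetical): when the dependency arrives through the constructor, a test can hand in a fake instead of the real implementation.

```java
// Hypothetical dependency; in production this would make a remote call
interface PriceFetcher {
    double fetchPrice(String symbol);
}

class PriceFormatter {
    private final PriceFetcher fetcher; // injected, never instantiated here

    PriceFormatter(PriceFetcher fetcher) {
        this.fetcher = fetcher;
    }

    String format(String symbol) {
        return symbol + "=" + fetcher.fetchPrice(symbol);
    }
}

public class DiDemo {
    public static String run() {
        // In a test we inject a fake instead of the real remote implementation
        PriceFetcher fake = symbol -> 42.0;
        return new PriceFormatter(fake).format("XYZ");
    }
}
```

Had PriceFormatter done `new RealPriceFetcher()` inside its constructor, no such substitution would be possible.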

Metrics
Use SONAR in your CI environment.
Check code coverage using cobertura or other tools.
Use Jenkins / Hudson / Other CI tool for automation.

IDE
Your IDE can help you write tests.
For Eclipse, I have two recommendations:

  1. MoreUnit is a cool plugin that helps you write tests faster.
  2. In Eclipse, CTRL+Space can give you hints and fill in imports, but not static imports.
    Most (all?) testing libraries use static imports.
    So you can add the testing libraries as favorites, and then Eclipse will fill them in for you.

    eclipse favorites

POM
Here’s part of a POM for testing libraries.
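The POM fragment itself is missing here; as a sketch, a typical test-dependencies section might look like this (the version numbers are illustrative, not necessarily the post’s originals):

```xml
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.hamcrest</groupId>
    <artifactId>hamcrest-library</artifactId>
    <version>1.3</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-core</artifactId>
    <version>1.10.19</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```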

You can use profiles to separate unit tests from integration tests.


It’s All About Tests – Part 2

This is the second post of the series about testing.
In the first part I explained the mindset we need to have while developing with tests. Or, in better words, developing for testable code.
In this part I will cover some techniques for the testing approach.
The techniques I describe can be seen as how to transform the mindset into actions.

Techniques

Types Of Tests
Types of tests are layers of what we test.

The most obvious one is the unit test.
Using JUnit (or TestNG, or any other tool), you will test the behavior of your code.
Each test should check one behavior of the class/method under test.

Another layer of tests, which is usually done by developers, is what I like to call integration tests.
This type of test will usually be part of the code (under the test directory).

Integration tests may test several classes together.
They may test partial flow.

I like to test the Spring wiring, verifying that the context file is correct. For example, if I have an injected list of beans whose order is important.
Testing the wiring can be considered as integration test.
Another example would be checking the integration of a DAO class and the class that uses it. Sometimes there are “surprises” in these parts.

As a higher degree of testing, you will want to test requests and responses (REST).
If you have a GUI, make an automated test suite for that as well.

Automation
Automate your full development cycle.
Use a CI service, such as Hudson/Jenkins.
Add your JUnit, Selenium, JMeter and JBehave suites to your CI environment.

I suggest the following:
1. A CI job that checks the SCM for changes and runs whenever there is a change.
2. A nightly job (or every few hours): a slower automation test suite that checks more things, like integration tests.
The nightly can be slower.
If you do continuous deployment, then your setup may be different.

Environment
Have a dedicated environment for testing:
A DB that can be cleared and refilled.
If you work on a REST service, have a server just for your test and automation environment.
If you can, try making it as similar as possible to the production environment.

Stub, Mock
There are frameworks for stubbing and mocking.
But first understand what it means.
There’s a slight difference between stubbing and mocking.
Basically they both fake a real object (or interface).
You can tell the fake object how to behave for certain inputs.
You can also verify that it was called with the expected parameters.
(more about it in next post)

Usage of External Resources
You can fake the DB, or you can use some kind of embedded database.
An embedded database helps you isolate tests that involve the DB.
The same goes for external services.

Descriptive Tests

  • Add the message parameter.
    assertTrue("Cache pairs is not size 2", cachePairs.size() == 2);
    

    It has at least two benefits:
    1. The test is more readable.
    2. When it fails, the message is clearer.

    How many times could you not tell what went wrong because there was no message? The failing test was assertTrue(something), without the message parameter.

  • Name your tests descriptively.
    Don’t be afraid to have test-methods with (very) long name.
    It really helps when the test fails.
    Don’t name a test something like: public void testFlow(){...}
    It doesn’t mean anything.
  • Have naming convention.
    I like to name my tests: public void whenSomeInput_ThenSomeOutput() {...}
    But whatever you like to name your tests, try to follow some convention for all tests.

Test Structure
Try to follow the:
Given, When, Then sequence.
Given is the part where you create the test environment (create embedded DB, set certain values etc.)
It is also the part where you tell your mocks (more about it next post) how to behave.
When is the part where you run the tested code.
Then is where you check the result using assertions.
It’s the part where you verify that methods were called. Or not.
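The structure above can be sketched as a tiny, self-contained test method (hypothetical names, following the naming convention described earlier):

```java
import java.util.ArrayList;
import java.util.List;

public class GivenWhenThenDemo {

    // A sketch of the Given / When / Then structure
    public static boolean whenAddingElement_ThenSizeGrows() {
        // Given - prepare the test environment
        List<String> list = new ArrayList<>();

        // When - run the tested code
        list.add("element");

        // Then - check the result with assertions
        return list.size() == 1;
    }
}
```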

If it’s hard to keep an orderly structure, then consider that a test smell (see the previous post).

Unit Tests Should Run Fast
A unit test class should run in 1-5 seconds. Not more.
You want the quickest feedback on whether something failed.
You will also want to run the unit tests as many times as possible.
If the tests for one class take around 30-60 seconds, then usually we won’t run them.

Running the full test suite of your project should not take more than a few minutes (more than 5 is too much).

Coverage
Tests should cover all your production code.
Coverage helps spot code which is not tested.
If it’s hard to cover some code, for instance due to many code branches (if-else), then again, you have a test smell.
If you practice TDD, then you automatically have very high coverage.

Important: Do not make code coverage as the goal.
Code coverage is a tool. Use it.

TDD
Allow me not to add anything here…

Conclusion
In this post I gave some more ways, more concrete, on how to approach development with tests.
In the following post I will give some pointers and tips on how to work with the available tools.


It’s All About Tests – Part 1

This post is the first of a series of three.
1. Mindset of testing
2. Techniques
3. Tools and Tips

The Mindset

Testing code is something that needs to be learned. It takes time to absorb how to do it well.
It’s a craft that one should always practice and improve.

Back in the old days, developers did not test; they checked their code.
Here’s a nice tweet about it:

Today we have many tools and techniques to work with.
XUnit frameworks, mock frameworks, UI automation, TDD, XP…

But I believe that testing starts with the mind. State of mind.

Why Testing
Should I really answer that?
Tests are your code harness and security for quality.
Tests tell the story of your code. They prove that something works.
They give immediate feedback if something went wrong.
Working with tests correctly makes you more efficient and effective.
You debug less and probably have fewer bugs; therefore you have more time to do actual work.
Your design will be better (more about it later) and maintainable.
You feel confident changing your code (refactor). More about it later.
It reduces stress, as you are more confident with your code.

What to Test
I say everything.
Perhaps you will skip the lowest parts of your system: the parts that read/write to the file system or the DB, or communicate with some external service.
But even these parts can be tested. And they should be.
In following posts I will describe some techniques for how to do that.

Test even the smallest thing. For example, if you have a DTO and you decide that a certain field will be initialized with some value, then write a test that only instantiates this class and then verifies (asserts) the expected value.
(And yes, I know, some parts really cannot be tested. But they should remain minimal.)
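As a self-contained sketch of that DTO example (the DTO and its default value are hypothetical):

```java
public class DtoDefaultTest {

    // Hypothetical DTO whose field must start with a default value
    static class OrderDto {
        private String status = "NEW"; // the decision under test

        String getStatus() {
            return status;
        }
    }

    // The "test": only instantiate the class and expose the value to assert on
    public static String defaultStatus() {
        return new OrderDto().getStatus();
    }
}
```

Tiny as it is, this test pins down a decision; if someone changes the default, the test tells the story.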

SRP
Single Responsibility Principle
This is how I like to refer to the point that a test needs to check one thing.
If it’s a unit test, then it should test one behavior of your method / class.
Different behavior should be tested in a different test.
If it’s a higher level of test (integration, functional, UI), then the same principle applies.
Test one flow of the system.
Test a click.
Test adding elements to DB correctly, but not deleting in the same test.

Isolation
An isolated test helps us understand exactly what went wrong.
Developing isolated tests helps us concentrate on one problem at a time.

One aspect of isolation is related to the SRP. When you test something, isolate the tested code from its other parts (dependencies).
That way you test only that part of the code.
If the test fails, you know where the problem was.
If you have many dependencies in the test, it is much harder to understand what the actual cause of failure was.

But isolation means other things as well.
It means that no test interferes with another.
It means that the running order of the tests doesn’t matter.
For a unit test, it means that you don’t need a DB running (or an internet connection, for that matter).
It means that you can run your tests concurrently without one interfering with another (Maven allows exactly this).
If you can’t do that (for example, due to DB issues), then your tests are not isolated.

Test Smells
When a test is too hard to understand or maintain, don’t get mad at it 🙂
Say

thank you very much, my dear test, for helping me improve the code

If it is too complicated to setup environment for the test, then probably the unit being tested has too many dependencies.

If, after running a method under test, you need to verify many aspects (verify, assert, etc.), the method probably does too much.
The test can be your best friend for code improvement.

Usually, really complicated test code means less structured production code.
I usually see a correlation between complicated tests and code that doesn’t follow the SRP, or other SOLID principles.

Testable Code
This is one of my favorites.
Whenever I do code review I ask the other person: “How are you going to test it?”, “How do you know it works?”
Whenever I code, I ask myself the same question. “How can I test this piece of code?”

In my experience, always thinking about how to create testable code yields much better design.
The code “magically” has more patterns, less duplication, better OOD and behaves SOLIDly.

Forcing yourself to constantly test your code makes you think.
It helps divide a big, complicated problem into many (or a few) smaller, more trivial ones.

If your code is testable and tested, you have more confidence in it.
Confidence in its behavior, and confidence to change it. To refactor it.

Refactoring
This item can be part of the why.
It can be also part of the techniques.
But I decided to give it special attention.
Refactoring is part of the TDD cycle (but not only).
When you have tests, you can be confident doing refactoring.
I think that you need to “think about refactoring” while developing. Similar to “think how to produce testable code”.
When thinking refactoring, testing comes along.

Refactoring is also a state of mind. Ask yourself: “Is the code I produced clean enough? Can I improve it?”
(BTW, know when to stop…)

This was the first post of a series of posts about testing.
The following post will be about some techniques and approaches for testing.


Ease at Work – A Talk by Kent Beck

The other day I was watching Kent Beck’s talk about Ease at Work.
It touched me deeply and I could really relate to what he described.
http://www.infoq.com/presentations/self-image

In order for me to remember, I decided to summarize the key points of the presentation.

The concept of Ease
There are many meanings to the word Ease, but in this case it’s not something like:
“I want to be rich and sit all day on the beach”.
Usually, we programmers like to pick up hard and challenging work.

Here are the three points that relate to Ease in our case.

  1. State of comfort
  2. Freedom from worry, pain, or agitation
  3. Readiness in performance, facility

The pendulum inside us
People’s emotions, feelings and state of mind tend to change like a pendulum.
We sometimes feel on top of the world: “I am the best programmer in the world and can solve anything”. This is one side.
Other times we feel at the very bottom: “I am soooo bad at what I do. I should stop programming and work on something completely different”.

There’s a huge range between the two sides of the pendulum.

So, being at ease would mean bringing the amplitude of the swing down.
Decreasing the range.

Once the pendulum swings in a lower range, I will feel better about what I’m doing.

List of things that help being at ease
In his talk, Kent explains each point. I will just list them.

  1. My work matters
  2. My code works
  3. I’m proud of my work
  4. Making public commitments
  5. Accountability (I am accountable)
  6. I interpret feedback
  7. I am a beginner – I don’t know everything
  8. Meditation
  9. I serve. No expectations of reward

These points are a reminder for me. Whenever I don’t feel at ease, I should check what’s wrong. These points are a good starting point.

My personal addition
I thought of another two points:

  1. Clear vision of the future
    A good developer (well, any employee) wants to know that he/she has a career path.
    I will be more at ease if “I know where I’m going from here”.
  2. Professional and flourishing environment
    A good developer will feel much more comfortable (at ease) in an environment of people he/she can learn from.
    The team should be diverse enough so everyone can learn from everyone else and mentor others.
    A good developer will feel at ease if he/she is surrounded by members who share his/her passion (for technology, clean code, or whatever).
