Fedora Installation

Aggregate Installation Tips

One of the reasons I am writing this blog is to keep a “log” for myself of how I resolved issues.

In this post I will describe how I installed several basic development tools on Fedora.
I want this laptop to be my workstation for out-of-work projects.

Almost everything in this post can be found elsewhere on the web.
In fact, most of what I am writing here comes from other links.

However, this post is intended to aggregate several installations together.

If you’re new to Linux (or not an expert, as I am not), you can learn some basic stuff here:
how to install packages (yum), how to build from source code, how to set up environment variables, and more.

First, we’ll start with how I installed Fedora.

Installing Fedora

I downloaded the Fedora ISO from https://getfedora.org/en/workstation/.
It is the GNOME distribution.
I then used http://www.linuxliveusb.com/ to create a self-bootable USB. It’s very easy to use.
I switched to KDE by running: sudo yum install @kde-desktop

Installing Java

Download the RPM package from the Oracle site.

# root
su -
# Install JDK in system
rpm -Uvh /path/.../jdk-8u40-linux-i586.rpm
# Use correct Java
alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 2000000
alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 2000000
alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 2000000
alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 2000000
# Example how to swap javac
# alternatives --config javac

Under /etc/profile.d/, create a file (jdk_home.sh) with the following content:

# Put this file under /etc/profile.d
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$JAVA_HOME/bin
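To verify the setup, you can check which Java the alternatives system points at:

# Verify the active Java version
java -version
javac -version
# Review (or change) the registered alternatives
alternatives --config java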

I used the following link, which explains how to install the JDK:
http://www.if-not-true-then-false.com/2014/install-oracle-java-8-on-fedora-centos-rhel/

Installing IntelliJ

Location: https://www.jetbrains.com/idea/download/

# root
su -
# Create IntelliJ location
mkdir -p /opt/idea
# Untar installation
tar -xvzf /path/.../ideaIC-14.1.tar.gz -C /opt/idea
# Create link for latest IntelliJ
ln -s /opt/idea/idea-IC-141.177.4/ /opt/idea/latest
chmod -R +r /opt/idea

Check https://www.jetbrains.com/idea/help/basics-and-installation.html

After installation, you can go to /opt/idea/latest/bin and run idea.sh.
The first time you run it, you will be prompted to create a desktop entry.
You can also create a command line launcher later on, as sketched below.
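A simple way to get a command line launcher is a symbolic link (a sketch; run as root, and it assumes /usr/local/bin is on your PATH):

# Create a command line launcher for IntelliJ
ln -s /opt/idea/latest/bin/idea.sh /usr/local/bin/idea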

Installing Eclipse

Location: http://www.eclipse.org/downloads/

su -
# create eclipse location
mkdir /opt/eclipse
# Unzip it
tar -xvzf /path/.../eclipse-java-luna-SR2-linux-gtk.tar.gz -C /opt/eclipse
# create link
ln -s /opt/eclipse/eclipse/ /opt/eclipse/latest
# Permissions
chmod -R +r /opt/eclipse/

Create an executable /usr/bin/eclipse:
#!/bin/sh
# name it eclipse
# put it in /usr/bin
# chmod 755 /usr/bin/eclipse
export ECLIPSE_HOME="/opt/eclipse/latest"
$ECLIPSE_HOME/eclipse "$@"

Create Desktop Launcher
# create /usr/local/share/applications/eclipse.desktop
# Paste the following
[Desktop Entry]
Encoding=UTF-8
Name=Eclipse
Comment=Eclipse Luna 4.4.2
Exec=eclipse
Icon=/opt/eclipse/latest/icon.xpm
Terminal=false
Type=Application
Categories=Development;IDE;
StartupNotify=true

See also http://www.if-not-true-then-false.com/2010/linux-install-eclipse-on-fedora-centos-red-hat-rhel/

Installing Maven

Download it from https://maven.apache.org/download.cgi

# root
su -
# installation location
mkdir /opt/maven
# unzip
tar -zxvf /path/.../apache-maven-3.3.1-bin.tar.gz -C /opt/maven
# link
ln -s /opt/maven/apache-maven-3.3.1/ /opt/maven/latest

Set the Maven environment by creating a file (e.g. maven-env.sh) under /etc/profile.d:
# put it in /etc/profile.d
export M2_HOME=/opt/maven/latest
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
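To verify, load the new environment and check the Maven version:

# Load the environment without re-logging in
source /etc/profile.d/maven-env.sh
mvn -version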

Installing git

I wanted to have the latest git client.
Using yum install did not get me the latest version, so I decided to build it from source.
I found a great blog post explaining how to do it:
http://tecadmin.net/install-git-2-0-on-centos-rhel-fedora/
Note: in the compile part, the author exports variables in /etc/bashrc.
Don’t do that. Instead, create a file under /etc/profile.d (as shown below).
Installation commands:

su -
yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel
yum install gcc perl-ExtUtils-MakeMaker
yum remove git
# Download source
# check latest version in http://git-scm.com/downloads
cd /usr/src
wget https://www.kernel.org/pub/software/scm/git/git-<latest-version>.tar.gz
tar xzf git-<latest-version>.tar.gz
# create git from source code
cd git-<latest-version>
make prefix=/opt/git all
make prefix=/opt/git install

git Environment
Create an ‘sh’ file under /etc/profile.d
# save under /etc/profile.d/git-env.sh
export PATH=$PATH:/opt/git/bin
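Open a new shell (so the profile script is loaded) and verify:

# Should point at /opt/git/bin/git and print the version you built
which git
git --version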


Playing With Java Concurrency

Recently I needed to transform some files, each containing a list (array) of objects in JSON format, into files that hold the same data (objects) as separate lines.

It was a one-time task and a simple one.
I did the reading and writing using some features of Java NIO.
I used GSON in the simplest way.
One thread runs over the files, converts, and writes.

The whole operation finished in a few seconds.

However, I wanted to play a little bit with concurrency.
So I enhanced the tool to work concurrently:

Threads
Runnable for reading a file.
The reader threads are submitted to an ExecutorService.
The output, which is a list of objects (User in the example), is put in a BlockingQueue.

Runnable for writing a file.
Each writer runnable polls from the blocking queue.
It writes lines of data to a file.
I don’t add the writer Runnable to the ExecutorService, but instead just start a thread with it.
The runnable has a while(some boolean is true) {...} pattern.
More about that below…

Synchronizing Everything
BlockingQueue is the interface between both types of threads.

As the writer runnable runs in a while loop (consumer), I wanted to be able to make it stop so the tool would terminate.
So I used two objects for that:

Semaphore
The loop that traverses the input files increments a counter (numberOfFiles).
The semaphore is created with zero permits. Once I finished traversing the input files and submitting the readers, the main thread blocked on the semaphore:
semaphore.acquire(numberOfFiles);

Each writer runnable releases the semaphore after it finishes writing a file:
semaphore.release();

AtomicBoolean
The while loop of the writers checks an AtomicBoolean.
As long as the AtomicBoolean is true, the writer continues.

In the main thread, just after the semaphore is acquired, I set the AtomicBoolean to false.
This enables the writer threads to terminate.
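Distilled, the coordination looks like this (a minimal sketch; the full code below has the real details):

// Main thread: the semaphore starts with 0 permits
Semaphore semaphore = new Semaphore(0);
AtomicBoolean stillWorking = new AtomicBoolean(true);
// ... submit one reader per input file, counting numberOfFiles ...
semaphore.acquire(numberOfFiles); // blocks until every file was written
stillWorking.set(false);          // lets the writer loops exit

// Writer thread: poll with a timeout so the stop flag is re-checked
while (stillWorking.get()) {
	EntitiesData data = entitiesQueue.poll(100, TimeUnit.MILLISECONDS);
	if (data != null) {
		// ... write the file ...
		semaphore.release();
	}
}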

Using Java NIO
In order to scan, read and write the file system, I used some features of Java NIO.

Scanning: Files.newDirectoryStream(inputFilesDirectory, "*.json");
Deleting output directory before starting: Files.walkFileTree...
BufferedReader and BufferedWriter: Files.newBufferedReader(filePath); Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset());

One note: in order to generate random files for this example, I used Apache Commons Lang: RandomStringUtils.randomAlphabetic.
All the code is on GitHub.
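For illustration, generating one random input file could look roughly like this (a sketch only; the User constructor and file name are assumptions, and the actual generator is in the GitHub repo):

// Sketch: write a JSON array of users with random names
User[] users = new User[100];
for (int i = 0; i < users.length; i++) {
	users[i] = new User(RandomStringUtils.randomAlphabetic(8));
}
try (BufferedWriter writer = Files.newBufferedWriter(Paths.get("input-1.json"), Charset.defaultCharset())) {
	writer.write(new Gson().toJson(users));
}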

public class JsonArrayToJsonLines {
	private final static Path inputFilesDirectory = Paths.get("src\\main\\resources\\files");
	private final static Path outputDirectory = Paths
			.get("src\\main\\resources\\files\\output");
	private final static Gson gson = new Gson();
	
	private final BlockingQueue<EntitiesData> entitiesQueue = new LinkedBlockingQueue<>();
	
	private AtomicBoolean stillWorking = new AtomicBoolean(true);
	private Semaphore semaphore = new Semaphore(0);
	int numberOfFiles = 0;

	private JsonArrayToJsonLines() {
	}

	public static void main(String[] args) throws IOException, InterruptedException {
		new JsonArrayToJsonLines().process();
	}

	private void process() throws IOException, InterruptedException {
		deleteFilesInOutputDir();
		final ExecutorService executorService = createExecutorService();
		DirectoryStream<Path> directoryStream = Files.newDirectoryStream(inputFilesDirectory, "*.json");
		
		for (int i = 0; i < 2; i++) {
			new Thread(new JsonElementsFileWriter(stillWorking, semaphore, entitiesQueue)).start();
		}

		directoryStream.forEach(new Consumer<Path>() {
			@Override
			public void accept(Path filePath) {
				numberOfFiles++;
				executorService.submit(new OriginalFileReader(filePath, entitiesQueue));
			}
		});
		
		semaphore.acquire(numberOfFiles);
		stillWorking.set(false);
		shutDownExecutor(executorService);
	}

	private void deleteFilesInOutputDir() throws IOException {
		Files.walkFileTree(outputDirectory, new SimpleFileVisitor<Path>() {
			@Override
			public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
				Files.delete(file);
				return FileVisitResult.CONTINUE;
			}
		});
	}

	private ExecutorService createExecutorService() {
		int numberOfCpus = Runtime.getRuntime().availableProcessors();
		return Executors.newFixedThreadPool(numberOfCpus);
	}

	private void shutDownExecutor(final ExecutorService executorService) {
		executorService.shutdown();
		try {
			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
				executorService.shutdownNow();
			}

			if (!executorService.awaitTermination(120, TimeUnit.SECONDS)) {
				System.err.println("Pool did not terminate");
			}
		} catch (InterruptedException ex) {
			executorService.shutdownNow();
			Thread.currentThread().interrupt();
		}
	}


	private static final class OriginalFileReader implements Runnable {
		private final Path filePath;
		private final BlockingQueue<EntitiesData> entitiesQueue;

		private OriginalFileReader(Path filePath, BlockingQueue<EntitiesData> entitiesQueue) {
			this.filePath = filePath;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			Path fileName = filePath.getFileName();
			// try-with-resources closes the reader even if parsing fails
			try (BufferedReader br = Files.newBufferedReader(filePath)) {
				User[] entities = gson.fromJson(br, User[].class);
				System.out.println("---> " + fileName);
				entitiesQueue.put(new EntitiesData(fileName.toString(), entities));
			} catch (IOException | InterruptedException e) {
				throw new RuntimeException(filePath.toString(), e);
			}
		}
	}

	private static final class JsonElementsFileWriter implements Runnable {
		private final BlockingQueue<EntitiesData> entitiesQueue;
		private final AtomicBoolean stillWorking;
		private final Semaphore semaphore;

		private JsonElementsFileWriter(AtomicBoolean stillWorking, Semaphore semaphore,
				BlockingQueue<EntitiesData> entitiesQueue) {
			this.stillWorking = stillWorking;
			this.semaphore = semaphore;
			this.entitiesQueue = entitiesQueue;
		}

		@Override
		public void run() {
			while (stillWorking.get()) {
				try {
					EntitiesData data = entitiesQueue.poll(100, TimeUnit.MILLISECONDS);
					if (data != null) {
						try {
							String fileOutput = outputDirectory.toString() + File.separator + data.fileName;
							Path fileOutputPath = Paths.get(fileOutput);
							// try-with-resources flushes and closes the writer
							try (BufferedWriter writer = Files.newBufferedWriter(fileOutputPath, Charset.defaultCharset())) {
								for (User user : data.entities) {
									writer.append(gson.toJson(user));
									writer.newLine();
								}
							}
							System.out.println("=======================================>>>>> " + data.fileName);
						} catch (IOException e) {
							throw new RuntimeException(data.fileName, e);
						} finally {
							semaphore.release();
						}
					}
				} catch (InterruptedException e1) {
					// ignore and re-check the stop flag
				}
			}
		}
	}

	private static final class EntitiesData {
		private final String fileName;
		private final User[] entities;

		private EntitiesData(String fileName, User[] entities) {
			this.fileName = fileName;
			this.entities = entities;
		}
	}
}

Linkedin Twitter facebook github

Using Groovy for Bash (shell) Operations

Recently I needed to create a Groovy script that deletes some directories on a Linux machine.
Here’s why:
1.
We have a server for running scheduled jobs,
such as ETL from one DB to another, file to DB, etc.
The server activates clients, which are located on the machines we want to act on.
Most (almost all) of the jobs are written as Groovy scripts.

2.
Part of our CI process is deploying a WAR into a dedicated server.
Then we have a script that, among other things, uses a soft link to point ‘webapps’ to the newly created directory.
This deployment happens once an hour, which fills up the dedicated server quickly.

So I needed to create a script that checks all directories in the relevant location and deletes the old ones.
I decided to keep the latest 4 directories.
That is currently a magic number in the script. If I want / need to, I can make it an input parameter. But I decided to start simple.

I decided to do it very simply:
1. List all directories with the prefix webapps_ in a known location.
2. Sort them by time, descending, and delete everything from index 4 onward.

def numberOfDirectoriesToKeep = 4
def webappsDir = new File('/usr/local/tomcat/tomcat_aps')
def webDirectories = webappsDir.listFiles().grep(~/.*webapps_.*/)
def numberOfWebappsDirectories = webDirectories.size();

if (numberOfWebappsDirectories >= numberOfDirectoriesToKeep) {
  webDirectories.sort{ it.lastModified() }.reverse()[numberOfDirectoriesToKeep..numberOfWebappsDirectories-1].each {
    logger.info("Deleting ${it}");
    // here we'll delete the directory. First try was a plain Java/Groovy delete of the directory tree
  }
} else {
  logger.info("Too few web directories")
}

It didn’t work.
Files were not deleted.
It turned out that the agent runs as a different user than the one that runs Tomcat,
so the agent did not have permission to remove the directories.

My solution was to run a shell command with sudo.

I found references at:
http://www.joergm.com/2010/09/executing-shell-commands-in-groovy/
and
http://groovy.codehaus.org/Executing+External+Processes+From+Groovy
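One thing worth knowing about Groovy’s String.execute(): it returns a java.lang.Process immediately and does not wait for it to finish. If you care about the outcome, wait for the process and check its exit code (a small sketch; the path is a placeholder):

def deleteCommand = "sudo -u tomcat rm -rf /path/to/old/webapps_dir"
def proc = deleteCommand.execute()
proc.waitFor()
if (proc.exitValue() != 0) {
	// stderr of the child process
	println "Delete failed: ${proc.errorStream.text}"
}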

To make a long story short, here’s the full script:

import org.slf4j.Logger

import com.my.ProcessingJobResult

def Logger logger = jobLogger
//ProcessingJobResult is proprietary
def ProcessingJobResult result = jobResult

try {
	logger.info("Deleting old webapps from CI - START")
	def numberOfDirectoriesToKeep = 4 // Can be externalized to an input parameter
	def webappsDir = new File('/usr/local/tomcat/tomcat_aps')
	def webDirectories = webappsDir.listFiles().grep(~/.*webapps_.*/)
	def numberOfWebappsDirectories = webDirectories.size();
	if (numberOfWebappsDirectories >= numberOfDirectoriesToKeep) {
		webDirectories.sort{ it.lastModified() }.reverse()[numberOfDirectoriesToKeep..numberOfWebappsDirectories-1].each {
			logger.info("Deleting ${it}");
			def deleteCommand = "sudo -u tomcat rm -rf " + it.toString();
			deleteCommand.execute();
		}
	} else {
		logger.info("Too few web directories")
	}
	result.status = Boolean.TRUE
	result.resultDescription = "Deleting old webapps from CI ended"
	logger.info("Deleting old webapps from CI - DONE")
} catch (Exception e) {
	logger.error(e.message, e)
	result.status = Boolean.FALSE
	result.resultError = e.message
}
return result

BTW,
there’s a minor off-by-one bug in the indexes, which I decided not to fix (for now), as we always have more directories than the threshold.


JUnit Rules

Introduction
In this post I would like to show an example of how to use a JUnit Rule to make testing easier.

Recently I inherited a rather complex system, in which not everything is tested, and even the tested code is complex.
Mostly I see a lack of test isolation.
(I will write a separate blog post about working with legacy code.)

One of the tests (and the code under test) I am fixing actually tests several components together.
It also connects to the DB. It tests some logic and the interaction between components.
When code in a totally different location did not compile, the test could not run, because it loaded the whole Spring context.
The structure was that before testing (any class), the entire Spring context was initiated.
The tests extend BaseTest, which loads the full Spring context.

BaseTest also cleans the DB in the @After method.

Important note: this article is about changing tests that are not structured entirely correctly.
When creating new code and tests, they should be isolated, test one thing, etc.
Better tests should use a mock DB / mocked dependencies, etc.
After I fix the tests and refactor, I’ll have the confidence to make more changes.

Back to our topic…
So, what I got was a slow test suite, no isolation, and even tests that could not run due to unrelated problems.

So I decided to separate the context loading from the DB connection, and both of them from the cleanup of the database.

Approach
In order to achieve that I did three things:
The first was to change the inheritance of the test class.
It stopped inheriting BaseTest.
Instead, it inherits AbstractJUnit4SpringContextTests.
Now I can create my own context per test and not load everything.

Now I needed two rules: a @ClassRule and a @Rule.
The @ClassRule is responsible for the DB connection.
The @Rule cleans up the DB before / after each test.

But first, what are JUnit Rules?
A short explanation would be that they provide a way to intercept test methods, similar to the AOP concept.
A @Rule allows us to intercept a method before and after its actual run.
A @ClassRule intercepts the test class run.
(Similar to @Before, @After and @BeforeClass.)
A well-known @Rule is JUnit’s TemporaryFolder.
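For example, using TemporaryFolder looks like this (standard usage; the files are deleted when the test method finishes):

public class TemporaryFolderExampleTest {
	@Rule
	public TemporaryFolder folder = new TemporaryFolder();

	@Test
	public void createsTempFile() throws IOException {
		File file = folder.newFile("data.txt");
		// the file (and its parent temp folder) is removed automatically after the test
		assertTrue(file.exists());
	}
}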

Creating @Rule
The easy part was to create a Rule that cleans up the DB before and after a test method.
You need to implement TestRule, which has one method: Statement apply(Statement base, Description description);
You can do a lot with it.
I found out that usually I will have an inner class that extends Statement.
The rule I created does not create the DB connection itself, but gets it in the constructor.

Here’s the full code:

public class DbCleanupRule implements TestRule {
	private final DbConnectionManager connection;

	public DbCleanupRule(DbConnectionManager connection) {
		this.connection = connection;
	}

	@Override
	public Statement apply(Statement base, Description description) {
		return new DbCleanupStatement(base, connection);
	}

	private static final class DbCleanupStatement extends Statement {
		private final Statement base;
		private final DbConnectionManager connection;

		private DbCleanupStatement(Statement base, DbConnectionManager connection) {
			this.base = base;
			this.connection = connection;
		}

		@Override
		public void evaluate() throws Throwable {
			try {
				cleanDb();
				base.evaluate();
			} finally {
				cleanDb();
			}
		}

		private void cleanDb() {
			connection.doTheCleanup();
		}
	}
}

Creating @ClassRule
A ClassRule is actually also a TestRule.
The only difference from a Rule is how we use it in our test code.
I’ll show it below.

The challenge in creating this rule was that I wanted to use the Spring context to get the correct connection.
Here’s the code:
(ExternalResource implements TestRule)

public class DbConnectionRule extends ExternalResource {
	private DbConnectionManager connection;

	public DbConnectionRule() {
	}

	@Override
	protected void before() throws Throwable {
		ClassPathXmlApplicationContext ctx = null;
		try {
			ctx = new ClassPathXmlApplicationContext("/META-INF/my-db-connection-TEST-ctx.xml");
			connection = (DbConnectionManager) ctx.getBean("myDbConnection");
		} finally {
			if (ctx != null) {
				ctx.close();
			}
		}
	}

	@Override
	protected void after() {
	}

	public DbConnectionManager getDbConnection() {
		return connection;
	}
}

(Did you notice that I could have made DbCleanupRule inherit from ExternalResource as well?)

Using it
The last part is how we use the rules.
A @Rule must be a public field.
A @ClassRule must be a public static field.

And there it is:

@ContextConfiguration(locations = { "/META-INF/one-dao-TEST-ctx.xml", "/META-INF/two-TEST-ctx.xml" })
public class ExampleDaoTest extends AbstractJUnit4SpringContextTests {
	@ClassRule
	public static DbConnectionRule connectionRule = new DbConnectionRule();

	@Rule
	public DbCleanupRule dbCleanupRule = new DbCleanupRule(connectionRule.getDbConnection());

	@Autowired
	private ExampleDao classToTest;

	@Test
	public void foo() {
	}
}

That’s all.
Hope it helps.

Eyal

[Edit]
I got some good remarks from Logan Mzz at DZone: http://java.dzone.com/articles/junit-rules#comment-125673

  1. Link to JUnit Rules: https://github.com/junit-team/junit/wiki/Rules
  2. There’s the ErrorCollector rule, which avoids annoying test-fail-fix cycles for a single test.
  3. And RuleChain, which is described in the comment; see the sketch below.
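A RuleChain orders rules relative to each other. A minimal sketch (LoggingRule here is a placeholder for any TestRule):

@Rule
public RuleChain chain = RuleChain
		.outerRule(new LoggingRule("outer rule"))
		.around(new LoggingRule("middle rule"))
		.around(new LoggingRule("inner rule"));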


Project Migration from Sourceforge to GitHub

I have an old project, named JVDrums, which was hosted on Sourceforge.
http://sourceforge.net/projects/jvdrums/

About JVDrums
It was written around six years ago (the date shown in the commit history: 2008-05-09).

The project is a MIDI client for Roland electronic drums, for uploading and backing up drumsets.
It was an early attempt at using testing during development (an early TDD attempt).

I used TestNG for the testing.

Initially I created it for my own model, the Roland TD-12. I needed a small app for uploading drumsets that other users created and sent me.
When I published it in some forums, I was asked to develop the client for other models (TD-6, TD-10).

That was cool, but I didn’t have the real modules (each model has its own module), so how could I develop and test for them?

Each module has a MIDI specification, so I downloaded them from Roland’s website.
Then I created tests that simulated the structure of the MIDI files, and I could hack the upload, download and editing.

I also created a basic UI using Java Swing.

Migration
All I needed to do was follow the instructions from:
https://github.com/nirvdrum/svn2git#readme
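In outline, the migration steps look roughly like this (a sketch; the URLs are placeholders and the README above is the authoritative reference):

# Install the svn2git tool (a Ruby gem)
sudo gem install svn2git
# Convert the Sourceforge SVN repository into a local git repository
mkdir jvdrums && cd jvdrums
svn2git <sourceforge-svn-url>
# Push the result to a new GitHub repository
git remote add origin git@github.com:<user>/jvdrums.git
git push --all origin
git push --tags origin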

And here we go: https://github.com/eyalgo/jvdrums

So if you need to migrate from Sourceforge to GitHub, just follow that link.