Java 9: Learning the new features – part 4


Hi, dear readers! Welcome to my blog. In this post, the last of our series, we will finally talk about the best-known new feature of Java 9: Jigsaw. But after all, why do we need a module system? Let’s find out!

In the beginning

In Java’s beginnings, we had several ways to package applications. There is the most generic unit, known as the JAR, and there are also more specific formats, such as WARs for web applications and EARs for Enterprise Java Beans (EJB) applications.

These applications typically do not consist only of code written by the development teams themselves: there is also a plethora of imported libraries and frameworks, such as logging libraries, ORM frameworks, web frameworks, etc.

Generally speaking, each of these libraries and frameworks is packaged as a JAR as well, and their dependencies are also packaged as JARs. This results in a scenario where a really big number of dependencies is included in a single application, just to make the whole thing work. The picture below shows a typical Spring Boot application’s classpath. Note the overwhelming mountain of dependencies:

[Image: fragment of a typical Spring Boot application’s dependency list – it is 267 items long!]

Jar hell

The situation described previously leads us to the infamous JAR hell. This term refers to all the problems developers have suffered across more than 20 years of Java, such as ClassNotFoundException, when the application can’t find a certain class, or NoClassDefFoundError, when a class that was present at compile time is missing – or shadowed by an incompatible version – at runtime.

Encapsulation problems

Another problem we have is encapsulation. Once a dependency is declared, all the classes from the imported package are accessible to the importer. Even if we declare a class with default (package) visibility, it is still possible to access it, just by declaring a class with the same package name as the class we want to use – don’t try this at home, folks!

This leads to poor interface designs, since we can’t really prevent certain classes from being exposed to the outside world.
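
To make this concrete, here is a hedged sketch of the trick, with hypothetical names: given a package-private class inside a library JAR, a client can reach it simply by declaring its own class in the same package, since default visibility checks only the package name, not the JAR of origin (barring sealed or signed JARs).

// Inside the library JAR – a package-private class, seemingly hidden:
package com.somelib.internal;

class SecretHelper {
    static String reveal() {
        return "not so hidden after all";
    }
}

// Inside the client application – a class declaring the SAME package name:
package com.somelib.internal;

public class Intruder {
    public static void main(String[] args) {
        // compiles and runs on the classpath, because default (package)
        // visibility only checks the package name
        System.out.println(SecretHelper.reveal());
    }
}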

Performance degradation

Another big problem is performance. This is especially felt on Java EE containers, since servers need to support a big list of features provided for applications. It is true that we had efforts in the past to improve this situation, such as the EAP profiles on the JBoss server, but still, the situation was far from resolved.

This results in heavy, clunky servers, which can be slow to operate and especially to initialize, besides being intensely memory demanding.

Enter the modules

To solve all the problems we saw in the previous sections, Java 9 gives us Jigsaw, the new module system for Java.

With Jigsaw, we can create modules from packages inside an application, allowing a much more coherent and organized structure. Not only that: with modules, we have to explicitly declare what we want to expose from a module, so we also eliminate the encapsulation problems we talked about earlier.

This also helps with the performance degradation we just saw, since with modules the number of classes and packages to be loaded by the servers can be significantly reduced, resulting in thinner servers.

So, let’s see how modules work in practice!

Creating a module

Let’s start by creating a simple project. The source code for this lab is on this link; the project was created using IntelliJ IDEA.

To create a module, all we have to do is create a Java file called module-info.java and place it at the root of the package structure we want to encapsulate in a module. The result is something like the image below:

[Image: project structure with module-info.java at the root of the package]

Inside the file, we define the module, which looks like this:

module com.alexandreesl.application {
}

The module keyword is now restricted in Java. In the code above we defined a module whose name, by convention, matches the name of its root package. That’s it! Our first module! Now, let’s see how to make this module talk with other modules.

Separating an application into different modules

Our sample application will consist of 4 modules: a main module, a DAO module, a service module and a model module.

To create the different modules, all we have to do is create the different packages and module definitions – the module-info.java files – creating the whole module structure.

The image below shows the structure:

[Image: project structure with the four modules and their module-info.java files]

And the new module definitions are:

module com.alexandreesl.dao {
}

module com.alexandreesl.model {
}

module com.alexandreesl.service {
}

Exposing a module

Now that we have the modules defined, let’s start coding our project. Our project will represent a simple CRUD of books for a bookstore system.

Let’s start by coding the Model module. We will create a Book class, to represent books from the system.

The code for the class is shown below:

package com.alexandreesl.model;

public class Book {

    private Long id;

    private String name;

    private String author;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getAuthor() {
        return author;
    }

    public void setAuthor(String author) {
        this.author = author;
    }
}

Then, we modify the module definition to export the model package:

module com.alexandreesl.model {

    exports com.alexandreesl.model;

}

Next, we code the DAO module. We will create an interface and an implementation for the module, separating them by package. We will also create an object factory.

This is the code for the interface, implementation and object factory of the dao module:

package com.alexandreesl.dao.interfaces;

import com.alexandreesl.model.Book;

public interface IBookDAO {

    void create(Book book);

    void update(Book book);

    Book find(Long id);


}
package com.alexandreesl.dao.impl;

import com.alexandreesl.dao.interfaces.IBookDAO;
import com.alexandreesl.model.Book;

public class BookDAOImpl implements IBookDAO {
    @Override
    public void create(Book book) {

        System.out.println("INSERTED THE BOOK!");

    }

    @Override
    public void update(Book book) {

        System.out.println("UPDATED THE BOOK!");

    }

    @Override
    public Book find(Long id) {

        Book book = new Book();
        book.setId(id);
        book.setName("Elasticsearch: Consuming real-time data with ELK");
        book.setAuthor("Alexandre Eleutério Santos Lourenço");

        return book;
    }


}
package com.alexandreesl.dao.interfaces;

import com.alexandreesl.dao.impl.BookDAOImpl;

public class BookDAOFactory {

    public static IBookDAO getBookDAO() {

        return new BookDAOImpl();

    }

}

The image below shows the final structure of the module with the classes:

[Image: final structure of the DAO module with the classes]

To expose the interfaces package and also use the Book class from the model module, we add the following lines to the module definition:

module com.alexandreesl.dao {

    requires com.alexandreesl.model;
    exports com.alexandreesl.dao.interfaces;

}

Here we can see an important advantage of modules: since we didn’t export the impl package, the implementation is not exposed to code outside the module.
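
For instance, if we tried to reference the implementation directly from another module, the compiler would reject it – a quick sketch of what we would see:

// in the main module, this import fails to compile:
// import com.alexandreesl.dao.impl.BookDAOImpl;
//
// error: package com.alexandreesl.dao.impl is not visible
//   (package com.alexandreesl.dao.impl is declared in module
//    com.alexandreesl.dao, which does not export it)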

Now we code the service module. To simplify things, we won’t use an interface-implementation approach this time, just a class that delegates to the DAO layer. The code for the service class is shown below:

package com.alexandreesl.service;

import com.alexandreesl.dao.interfaces.BookDAOFactory;
import com.alexandreesl.dao.interfaces.IBookDAO;
import com.alexandreesl.model.Book;

public class BookService {

    private IBookDAO bookDAO;

    public BookService() {

        bookDAO = BookDAOFactory.getBookDAO();

    }

    public void create(Book book) {
        bookDAO.create(book);
    }

    public void update(Book book) {
        bookDAO.update(book);
    }

    public Book find(Long id) {
        return bookDAO.find(id);
    }


}

And the module changes are as follows:

module com.alexandreesl.service {

    requires com.alexandreesl.model;
    requires com.alexandreesl.dao;
    exports com.alexandreesl.service;

}

Finally, we code the main module, which is simply a main method where we test out our structure:

package com.alexandreesl.application;

import com.alexandreesl.model.Book;
import com.alexandreesl.service.BookService;

public class Main {

    public static void main(String[] args) {

        Book book = new Book();

        book.setAuthor("Stephen King");
        book.setId(1L);
        book.setName("IT - The thing");

        BookService service = new BookService();

        service.create(book);

        book.setName("IT");

        service.update(book);

        Book searchedBook = service.find(2L);

        System.out.println(searchedBook.getName());
        System.out.println(searchedBook.getAuthor());


    }

}

If we run our code, we will see that everything works, just as designed:

/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=50683:/Applications/IntelliJ IDEA CE.app/Contents/bin" -Dfile.encoding=UTF-8 -p /Users/alexandrelourenco/Applications/git/JigsawLab9/out/production/application:/Users/alexandrelourenco/Applications/git/JigsawLab9/out/production/service:/Users/alexandrelourenco/Applications/git/JigsawLab9/out/production/dao:/Users/alexandrelourenco/Applications/git/JigsawLab9/out/production/model -m com.alexandreesl.application/com.alexandreesl.application.Main
INSERTED THE BOOK!
UPDATED THE BOOK!
Elasticsearch: Consuming real-time data with ELK
Alexandre Eleutério Santos Lourenço

Process finished with exit code 0

Please remember that the code for this project is on GitHub, on this link.

Static dependencies

One thing the reader may notice from our code is that we needed to require the model module on each of the other modules of our system. This is because, as said before, no dependency required by a module is automatically inherited by another module in the hierarchy. All the requirements must be explicitly declared in order to be linked.

However, in this case, if we wanted to declare the dependency in just one module and tell Java on the other modules that the dependency will be met later, we could use the static keyword. Static dependencies on Jigsaw are analogous to the provided scope on Maven, where a dependency is marked as a compile-time requirement only and is assumed to be present at runtime.

To make the changes so the model module is required at runtime by just one module, we change all the module definitions to the following:

module com.alexandreesl.application {

    requires com.alexandreesl.model;
    requires com.alexandreesl.service;

}
module com.alexandreesl.dao {

    requires static com.alexandreesl.model;
    exports com.alexandreesl.dao.interfaces;

}
module com.alexandreesl.service {

    requires static com.alexandreesl.model;
    requires com.alexandreesl.dao;
    exports com.alexandreesl.service;

}

If we run our code again, we will see that it runs successfully, just like before.
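
As a side note, for readers who prefer the command line over the IDE, a modular project laid out with one source directory per module can be compiled and run with the module-aware flags of javac and java – a sketch, assuming the sources live under a src folder:

javac -d out --module-source-path src $(find src -name "*.java")
java --module-path out -m com.alexandreesl.application/com.alexandreesl.application.Main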

Package manager support

Since this is a brand-new concept, there is still work under way to support it on Java’s build tools, such as Maven and Gradle. Keep in mind that the objective of Jigsaw is not to replace dependency management systems.

Think of it more as a complement to these systems, with Jigsaw managing exposure and internal dependencies and the build systems managing issues such as packaging artifacts, running tests, etc.

If the reader is familiar with Gradle, there are already some plugins that integrate Jigsaw with it, like chainsaw:

https://github.com/zyxist/chainsaw

Conclusion

And so we conclude our Java 9 series. With several interesting new features, this new edition proves not only that Java still has relevance on the market, but also that it can still evolve with the most modern practices in use. Thank you for following me on this post, until next time.


Java 9: Learning the new features – part 3


Hi, dear readers! Welcome to my blog. Continuing our series on the new features of Java 9, we will now talk about Reactive Streams, a new concept in asynchronous processing that promises to protect our applications from being overflowed by messages. So, without further delay, let’s begin!

What are Reactive Streams?

Let’s imagine an e-commerce system that has to send orders to a distribution center. The e-commerce and DC systems are separate from each other, communicating through a REST service.

Normally, we could simply create a call from the e-commerce system to the DC. So, we implement the call and everything is fine.

However, one day we get a problem in our integration. We notice the e-commerce system has overflowed the DC with calls from a Black Friday sale, so the REST endpoint starts to fail and lose data.

This scenario illustrates a common integration problem: when a consumer has processing limitations above a certain volume of messages, we need to ensure the integration doesn’t end up overflowing the pipeline.

To tackle this problem, a pattern called Reactive Streams was designed. With Reactive Streams, the flow of processing is controlled by the Consumer, not the Publisher: the Consumer calls for more data to process as soon as it is ready, keeping its own pace. Not only that, we also have a feature called back pressure, a kind of throttling which ensures the Publisher waits for the Consumer to be available before sending any more messages, preventing the Consumer from being overflowed, just like we saw in our previous example.

The diagram below shows the new Flow API on Java 9, which allows us to implement Reactive Streams. We can see our consumer (the subscriber) establishing a subscription with the producer and requesting n messages to consume, which are delivered for processing by the onNext method. Don’t worry about the rest of the details: we will see more in the next sections.

The reference site for this diagram, with another good tutorial on Reactive Streams, can be found in the references section at the end of the post.

Creating a Stream: the Publisher

First, let’s create our publisher. The simplest way to create a publisher is by using the SubmissionPublisher class.

Our lab will simulate the orders integration we talked about earlier. We will begin by creating a DTO to hold the data from our orders:

package com.alexandreesl.handson.model;

import java.math.BigDecimal;
import java.util.Date;
import java.util.List;

public class Order {

    private Long id;

    private List<String> products;

    private BigDecimal total;

    private Date orderDate;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public List<String> getProducts() {
        return products;
    }

    public void setProducts(List<String> products) {
        this.products = products;
    }

    public BigDecimal getTotal() {
        return total;
    }

    public void setTotal(BigDecimal total) {
        this.total = total;
    }

    public Date getOrderDate() {
        return orderDate;
    }

    public void setOrderDate(Date orderDate) {
        this.orderDate = orderDate;
    }
}

Next, let’s instantiate our publisher, passing the message DTO as the generic type:

public static void main(String[] args) {

    SubmissionPublisher<Order> submissionPublisher = new SubmissionPublisher<>();


}

That’s all we need to do for now with our publisher.

Creating a Stream: the Consumer

Now, let’s create our consumer – or subscriber, in other words. For this, we will create a class called CDOrderConsumer that implements the Subscriber<T> interface:

package com.alexandreesl.handson.consumer;

import com.alexandreesl.handson.model.Order;

import static java.util.concurrent.Flow.Subscriber;
import static java.util.concurrent.Flow.Subscription;


public class CDOrderConsumer implements Subscriber<Order> {

    private Subscription subscription;

    @Override
    public void onSubscribe(Subscription subscription) {

        this.subscription = subscription;
        subscription.request(1);

    }

    @Override
    public void onNext(Order item) {
        System.out.println("I am sending the Order to the CD!");
        subscription.request(1);

    }

    @Override
    public void onError(Throwable throwable) {

        throwable.printStackTrace();

    }

    @Override
    public void onComplete() {

        System.out.println("All the orders were processed!");

    }
}

In this class we can see several methods implemented, which we saw on the previous diagram. We can explain each of them as follows:

  • onSubscribe: on this method, we receive an instance of Subscription, which we use to request messages from the publisher. In our example, we stored the instance and requested 1 message to be processed – it is possible to request more than 1 message per call, allowing the subscriber to process batches of messages – with the subscription.request(n) method call;
  • onNext(T): on this method, we process the messages received. In our example, we print a message symbolizing the REST call and ask the publisher for another message;
  • onError(Throwable throwable): on this method, we receive errors that may occur during message processing. In our example, we simply print the errors;
  • onComplete(): this method is called after all messages are processed. In our example, we just print a message to signal the completion. It is important to note that, if we didn’t make the onNext(T) method ask for more messages, this method would be called right after the first message, since no more messages would be requested from the publisher;

Now that we understand how to implement our Subscriber, let’s try out our stream by subscribing to the publisher and sending a message:

public static void main(String[] args) throws IOException {


    SubmissionPublisher<Order> submissionPublisher = new SubmissionPublisher<>();
    submissionPublisher.subscribe(new CDOrderConsumer());

    Order order = new Order();
    order.setId(1L);
    order.setOrderDate(new Date());
    order.setTotal(BigDecimal.valueOf(123));
    order.setProducts(List.of("product1", "product2", "product3"));


    submissionPublisher.submit(order);

    submissionPublisher.close();

    System.out.println("Waiting for processing.......");
    System.in.read();


}

In our script, we instantiate a publisher, create an order and submit the message for processing. Keep in mind that the submit call doesn’t mean the message was sent to the subscriber: that only happens when the subscriber calls subscription.request(n). Lastly, we close the publisher, as we will not send any more messages.

Note: You may be wondering why we put that System.in.read() at the end. This is because all processing of the stream is done on a separate thread from the main one, so we need to make the program wait for the processing to complete, or else it would exit before the message is processed.
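
As an alternative to System.in.read(), we could block on a CountDownLatch released when the stream completes – a minimal, self-contained sketch of the idea, using SubmissionPublisher’s convenience consume method:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SubmissionPublisher;

public class LatchExample {

    public static void main(String[] args) throws InterruptedException {

        CountDownLatch latch = new CountDownLatch(1);
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();

        // consume returns a CompletableFuture that completes when the stream is closed
        publisher.consume(s -> System.out.println("processing " + s))
                .thenRun(latch::countDown);

        publisher.submit("order-1");
        publisher.close();

        latch.await(); // blocks the main thread until processing finishes
    }
}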

If we execute our program, we will see an output like this:

/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=50218:/Applications/IntelliJ IDEA CE.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath /Users/alexandrelourenco/Applications/git/ReactiveStreamsJava9/out/production/ReactiveStreamsJava9Lab com.alexandreesl.handson.Main
Waiting for processing.......
I am sending the Order to the CD!
All the orders were processed!

Success!!! Now we have a fully functional reactive stream, allowing us to process our messages.

Processors on Reactive Streams

Sometimes, on a stream, there will be logic that can be placed between the publisher and the consumer, such as filtering, transforming, and more. For this purpose, we can implement processors. Processors are like subscribers that also publish messages after the logic is applied. This way, processors can be chained together on a stream, executing one after another, before finally passing the message to a subscriber.

Let’s expand our previous example. We detected a bug in our e-commerce system that sometimes places “phantom” orders with 0 total value on the stream. We haven’t identified the cause yet, but it is necessary to prevent these fake orders from being sent to the CD system. We can use a processor to filter out these fake orders.

So, let’s implement the following class, OrderFilter, to accomplish this:

package com.alexandreesl.handson.processor;

import com.alexandreesl.handson.model.Order;

import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;


public class OrderFilter extends SubmissionPublisher<Order> implements Flow.Processor<Order, Order> {

    private Flow.Subscription subscription;


    @Override
    public void onSubscribe(Flow.Subscription subscription) {

        this.subscription = subscription;
        subscription.request(1);

    }

    @Override
    public void onNext(Order item) {

        if (item.getTotal().doubleValue() > 0) {

            submit(item);

        } else {

            System.out.println("INVALID ORDER! DISCARDING...");

        }

        subscription.request(1);


    }

    @Override
    public void onError(Throwable throwable) {

        throwable.printStackTrace();

    }

    @Override
    public void onComplete() {

        System.out.println("All the orders were processed!");

    }


}

In this class, we implement both the publisher and subscriber interfaces. The code is basically the same as our subscriber, except that in the onNext(T) method we implement logic that checks if an order has a total value bigger than 0. If it has, it is submitted to the subscriber; otherwise, it is discarded.

Next, we modify our code, subscribing the processor to our stream and testing it with 2 orders, one valid and one fake:

public static void main(String[] args) throws IOException {

    SubmissionPublisher<Order> submissionPublisher = new SubmissionPublisher<>();
    OrderFilter filter = new OrderFilter();
    submissionPublisher.subscribe(filter);
    filter.subscribe(new CDOrderConsumer());

    Order order = new Order();
    order.setId(1L);
    order.setOrderDate(new Date());
    order.setTotal(BigDecimal.valueOf(123));
    order.setProducts(List.of("product1", "product2", "product3"));

    submissionPublisher.submit(order);

    order = new Order();
    order.setId(2L);
    order.setOrderDate(new Date());
    order.setProducts(List.of("product1", "product2", "product3"));

    order.setTotal(BigDecimal.ZERO);

    submissionPublisher.submit(order);

    submissionPublisher.close();

    System.out.println("Waiting for processing.......");
    System.in.read();

}

If we run the code, we will see a message indicating that one of the orders was discarded, showing that our implementation was a success:

/Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/bin/java "-javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=51469:/Applications/IntelliJ IDEA CE.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath /Users/alexandrelourenco/Applications/git/ReactiveStreamsJava9/out/production/ReactiveStreamsJava9Lab com.alexandreesl.handson.Main
Waiting for processing.......
INVALID ORDER! DISCARDING...
I am sending the Order to the CD!
All the orders were processed!

The source code for our lab can be found here.

Conclusion

And so we conclude our learning on Reactive Streams. With a simple and intuitive approach, Reactive Streams are a good solution to try out, especially on systems that have capacity limitations. Please follow me next time for the last chapter of this series, where we will finally see the famed new module system, Jigsaw. Thank you for following me on this post, until next time.

References

Reactive Streams (Wikipedia)

Reactive Streams Tutorial (another good tutorial to serve as guide)

Java 9: Learning the new features – part 1


Hi, dear readers! Welcome to my blog. This is the first post from a series focused on studying the new features from Java 9.

After waiting so long for features like Jigsaw, the so-called Java module system, Java 9 is finally upon us. Let’s begin our journey by exploring the new REPL console for the language: JShell!

Installing Java 9

To install Java 9, I recommend following the instructions on this link.

REPL

REPL is an acronym that stands for Read-Eval-Print-Loop. A REPL is a terminal where we can input commands and receive immediate feedback about the code we just entered.

The code is read, its syntax is evaluated, then it is executed, the results are printed on the console and finally the terminal loops for the next command, hence concluding the execution, just like the acronym dictates.

Starting JShell

To start JShell, we just open a terminal and enter:

jshell

This will initialize the shell, as we can see below:

|  Welcome to JShell -- Version 9

|  For an introduction type: /help intro

jshell>

Just to finish our first glance at basic JShell commands, to exit the console, we just type:

jshell> /exit

|  Goodbye

Running commands

Now, let’s enter some commands. First, let’s create a String variable:

jshell> String myString = "Welcome to my JShell!"

myString ==> "Welcome to my JShell!"

jshell>

There are two things we can notice in the code above: first, we don’t need to use a semicolon. Second, we can see the REPL in motion, as the code was processed and the results were printed on the console. If we just type the variable name, we can see that it will print its contents:

jshell> myString

myString ==> "Welcome to my JShell!"

jshell>

We can also run other types of commands, such as loops:

jshell> for (int i = 0;i < 10; i++)

   ...> System.out.println(i)

0

1

2

3

4

5

6

7

8

9

jshell>

It is also possible to do simple arithmetic operations. Let’s try a simple addition:

jshell> 1 + 2

$1 ==> 3

jshell>

Did you notice we didn’t define a variable? When we don’t include one, JShell does this for us – in this case, $1. The name is a $ followed by the command’s index, since JShell stores the commands of our session in an array-like structure.

We can see our session’s command history with the /list command, as follows:

jshell> /list

   1 : 1 + 2

   2 : String myString = "Welcome to my JShell!";

   3 : myString

   4 : for (int i = 0;i < 10; i++)

       System.out.println(i);

jshell>

Of course, implicitly defined variables can also be used in other commands, as follows:

jshell> int i = $1 + 1

i ==> 4

jshell>

Editing scripts

JShell also lets us edit and save snippets of code, which allows us to create classes this way. Let’s see how to do it.

JShell comes with an editor, but it is also possible to change to another editor of your choice. I will change my editor to Vim, using the following command:

jshell> /set editor vim

|  Editor set to: vim

jshell>

Now that our editor is changed, let’s begin by opening the command with the for loop in the editor – in my case, it is the command at index 4:

jshell> /edit 4

This will open the snippet on Vim editor. Let’s edit the code as follows and save:

public class MyObject {

    public static void myMethod() {

        for (int i = 0; i < 10; i++)
            System.out.println(i);

    }

}

After saving, we will see a message indicating that the class was created:

jshell> /edit 4

|  created class MyObject

0

1

2

3

4

5

6

7

8

9

We can also discard the old code with the /drop command:

/drop 4

Now, let’s try to use our class on the shell:

jshell> MyObject.myMethod()

0

1

2

3

4

5

6

7

8

9

jshell>

As we can see, the code was correctly executed, proving that our class creation was a success.

Importing & Exporting

Importing and exporting are done with the /save and /open commands. Let’s run the following command:

/save <path-to-save>/backup.txt

The result will be a file like the following:

1 + 2
String myString = "Welcome to my JShell!";
myString
int i = $1 + 1;
public class MyObject {

public static void myMethod() {

for (int i = 0;i < 10; i++)
System.out.println(i);

}

}
MyObject.myMethod()

Now, let’s close the shell with the /exit command and open it again, cleaning our session.

Now, let’s run the /open command to import our previous commands:

/open <path-to-save>/backup.txt

And finally, let’s run the /list command to see if the commands from our backup were imported:

jshell> /list

   1 : 1 + 2

   2 : String myString = "Welcome to my JShell!";

   3 : myString

   4 : int i = $1 + 1;

   5 : public class MyObject {

       

       public static void myMethod() {

       

       for (int i = 0;i < 10; i++)

       System.out.println(i);

       

       }

       

       }

   6 : MyObject.myMethod()

jshell>

We can see that our import was a success, bringing back the commands from the previous session.

Other commands

Of course there are other commands besides the ones shown on this post. A complete list of all the commands in JShell can be found on JShell’s documentation.
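
As a non-exhaustive sample, a few other commands worth trying:

  • /vars: lists the variables declared on the session;
  • /methods: lists the methods declared on the session;
  • /types: lists the classes and interfaces declared on the session;
  • /history: prints everything typed on the current session;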

Conclusion

And so we conclude our first glimpse of the new features of Java 9. JShell is an interesting new addition to the Java language, allowing us to quickly test and run Java code. It is not a tool for production use, in my opinion, but it is a good tool for development and learning purposes. Thank you for following me on this post, until next time.


Apache Camel: integrating systems with Java


Hi, dear readers! Welcome to my blog. In this post, we will talk about Apache Camel, a robust solution for deploying system integrations across various technologies, such as REST, WS, JMS, JDBC, AWS products, LDAP, SMTP, Elasticsearch, etc.

So, let’s get started!

Origin

Apache Camel was created inside Apache ServiceMix, a project powered by the Spring Framework and implemented following the JBI specification. The Java Business Integration (JBI) specification defines a plug-and-play platform for system integrations, following the EIP (Enterprise Integration Patterns) catalog.

Terminology

Exchange

Exchanges – or MEPs (Message Exchange Patterns) – are like frames in which we transport our data across integrations on Camel. An Exchange can have 2 messages inside, one representing the input and another representing the output of an integration.

The output message on Camel is optional, since we could have an integration that doesn’t produce a response. Also, an Exchange can have properties, represented as key-value entries, which hold data that can be used across the whole route (we will see more about routes very soon).

Message

Messages are the data itself, transferred inside a Camel route. A Message can have a body, which is the data itself, and headers, which are – like the properties on an Exchange – key-value entries that can be used along the processing.

One important aspect to keep in mind, however, is that along a Camel route our Messages may be replaced – when we convert the body with a type converter, for instance – and when this happens, we can lose our headers. So, Message headers must be seen as ephemeral data that will not necessarily survive the whole route. For that kind of data, it is better to use Exchange properties.

The Message body can be made of several types of data, such as binaries, JSON, etc.

Camel context

The Camel context is the runtime container where Camel runs. It initializes type converters, routes, endpoints, EIPs, etc.

A Camel context has 3 possible states: started, suspended and stopped. When started, the context serves route processing as normal.

When suspended, the Camel context stops the processing – after the Exchanges already being processed are completed – but keeps all the caches, resources, etc. loaded. A suspended context can be restarted.

Finally, there’s the stopped state. When stopped, the context stops the processing like in the suspended state, but also releases all the resources, caches, etc., making a complete shutdown. As with the suspended state, Camel also guarantees that all the Exchanges being processed are finished before the shutdown.

Route

Routes on Camel are the heart of the processing. A route consists of a flow that starts on an endpoint, passes through a stream of processors/converters and finishes on another endpoint. It is possible to chain routes by calling another route as the final endpoint of a previous route.

A route can also use other features, such as EIPs, asynchronous and parallel processing.

Channel

When Camel executes a route, the controller that executes it is called a Channel.

A Channel is responsible for chaining the processors’ execution, passing the Exchange from one to another, alongside monitoring the route execution. It also allows us to implement interceptors to run logic on certain route events, such as when an Exchange is going to a specific Endpoint, as in the sketch below.
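
For instance, inside a RouteBuilder’s configure method, an interceptor could be registered like this – a hedged sketch, assuming the SQS endpoint used in the lab further below:

// logs every Exchange that is about to be sent to any SQS endpoint
interceptSendToEndpoint("aws-sqs://*")
        .log("sending ${body} to SQS");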

Processor

Processors are the primary extension points on Camel. By creating classes that implement the org.apache.camel.Processor interface, we create programming units that we can use to include our own code in a Camel route, inside a convenient process method.
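
As a hedged sketch, a hypothetical Processor that stamps an audit header on each Exchange could look like this:

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class AuditProcessor implements Processor {

    @Override
    public void process(Exchange exchange) throws Exception {
        // read the incoming message body and enrich the message with a header
        String body = exchange.getIn().getBody(String.class);
        exchange.getIn().setHeader("auditedAt", System.currentTimeMillis());
        System.out.println("Audited message: " + body);
    }
}

It would then be plugged into a route with .process(new AuditProcessor()).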

Component

A Component acts like a factory to instantiate Endpoints for our use. We don’t use a Component directly; instead, we reference it by defining an Endpoint URI, which makes Camel infer the Component it needs to use in order to create the Endpoint.

Camel provides dozens of Components, from file to JMS, from AWS connections to their products, and so on.

Registry

In order to utilize beans from IoC systems, such as OSGi, Spring and JNDI, Camel supplies us with a bean Registry. The Registry’s mission is to supply the beans referenced on Camel routes from its associated context, such as an OSGi container, a Spring context, etc.

Type converter

Type converters, as the name implies, are used to convert the body of a message from one type to another. The uses for a converter are varied, ranging from converting a binary format to a String to converting XML to JSON.

We can create our own type converter by implementing the org.apache.camel.TypeConverter interface. After creating our own converter, we need to register it on the type converter registry.

Endpoint

An Endpoint is the entity responsible for communication into or out of a Camel route’s execution process. It comprises several types of sources and destinations as mentioned before, such as SQS, files, relational databases and so on. An Endpoint is instantiated and configured by providing a URI to a Camel route, following the pattern below:

component:option?option1=value1&option2=value2
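
For instance, the file endpoint used later in this lab follows exactly this pattern:

file:/Users/alexandrelourenco/Documents/apachecamelhandson?delay=1000&charset=utf-8&delete=true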

We can create our own components by implementing the org.apache.camel.Endpoint interface. When implementing the interface, we need to supply, among other things, the logic to create a polling consumer, an event-driven consumer and a producer. In practice, it is easier to extend a base class such as org.apache.camel.impl.DefaultEndpoint, as sketched below.
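
A minimal sketch of such an extension, with a hypothetical MyEndpoint extending DefaultEndpoint (which already handles most of the boilerplate of the Endpoint interface):

import org.apache.camel.Consumer;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultEndpoint;

public class MyEndpoint extends DefaultEndpoint {

    @Override
    public Producer createProducer() throws Exception {
        // logic to create the producer side of the endpoint would go here
        throw new UnsupportedOperationException("not implemented in this sketch");
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
        // logic to create the consumer side of the endpoint would go here
        throw new UnsupportedOperationException("not implemented in this sketch");
    }

    @Override
    public boolean isSingleton() {
        return true; // one endpoint instance can be shared across the context
    }
}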

Lab

So, without further delay, let’s start our lab! In this lab, we will create a route that polls access-log-style files from a folder, sends the log lines to an SQS queue and backs the files up on S3.

Setup

The setup for our lab is pretty simple: it is a Spring Boot application, configured to work with Camel. Our build.gradle file is as follows:

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'org.springframework.boot'
apply plugin: 'maven'
apply plugin: 'idea'

jar {
    baseName = 'apache-camel-handson'
    version = '1.0'
}

project.ext {
    springBootVersion = '1.5.4.RELEASE'
    camelVersion = '2.18.3'

}

sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    mavenLocal()
    mavenCentral()
}

bootRun {
    systemProperties = System.properties
}


dependencies {

    compile group: 'org.apache.camel', name: 'camel-spring-boot-starter', version: camelVersion
    compile group: 'org.apache.camel', name: 'camel-commands-spring-boot', version: camelVersion
    compile group: 'org.apache.camel',name: 'camel-aws', version: camelVersion
    compile group: 'org.apache.camel',name: 'camel-mail', version: camelVersion
    compile group: 'org.springframework.boot', name: 'spring-boot-autoconfigure', version: springBootVersion
    

}
group 'com.alexandreesl.handson'
version '1.0'

buildscript {
    repositories {
        mavenLocal()
        maven {
            url "https://plugins.gradle.org/m2/"
        }
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.5.4.RELEASE")
    }
}

And the Java main file is a simple Java Spring Boot Application file, as follows:

package com.alexandreesl.handson;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

/**
 * Created by alexandrelourenco on 28/06/17.
 */

@ComponentScan(basePackages = {"com.alexandreesl.handson"})
@SpringBootApplication
@EnableAutoConfiguration
public class ApacheCamelHandsonApp {

    public static void main(String[] args) {
        SpringApplication.run(ApacheCamelHandsonApp.class, args);
    }

}

We also create a configuration class, where we will register a type converter that we will create in the next section:

package com.alexandreesl.handson.configuration;

import org.apache.camel.CamelContext;
import org.apache.camel.spring.boot.CamelContextConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CamelConfiguration {


    @Bean
    public CamelContextConfiguration camelContextConfiguration() {

        return new CamelContextConfiguration() {

            @Override
            public void beforeApplicationStart(CamelContext camelContext) {
               

            }

            @Override
            public void afterApplicationStart(CamelContext camelContext) {

            }

        };

    }

}

We also create a configuration class which will create an AmazonS3Client and an AmazonSQSClient, to be used by the AWS-S3 and AWS-SQS Camel endpoints:

package com.alexandreesl.handson.configuration;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.internal.StaticCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.sqs.AmazonSQSClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

@Configuration
public class AWSConfiguration {


    @Autowired
    private Environment environment;

    @Bean(name = "s3Client")
    public AmazonS3Client s3Client() {
        return new AmazonS3Client(staticCredentialsProvider()).withRegion(Regions.fromName("us-east-1"));
    }

    @Bean(name = "sqsClient")
    public AmazonSQSClient sqsClient() {
        return new AmazonSQSClient(staticCredentialsProvider()).withRegion(Regions.fromName("us-east-1"));
    }

    @Bean
    public StaticCredentialsProvider staticCredentialsProvider() {
        return new StaticCredentialsProvider(new BasicAWSCredentials("<access key>", "<secret access key>"));
    }

}

PS: this lab assumes that the reader is familiar with AWS and already has an account. For the lab, a bucket called “apache-camel-handson” and a SQS queue called “MyInputQueue” were created.

Configuring the route

Now that we have our Camel environment set up, let’s begin creating our route. First, we create a type converter called “StringToAccessLogDTOConverter” with the following code:

package com.alexandreesl.handson.converters;

import com.alexandreesl.handson.dto.AccessLogDTO;
import org.apache.camel.Converter;
import org.apache.camel.TypeConverters;

import java.util.StringTokenizer;

/**
 * Created by alexandrelourenco on 30/06/17.
 */

public class StringToAccessLogDTOConverter implements TypeConverters {

    @Converter
    public AccessLogDTO convert(String row) {

        AccessLogDTO dto = new AccessLogDTO();

        StringTokenizer tokens = new StringTokenizer(row);

        dto.setIp(tokens.nextToken());
        dto.setUrl(tokens.nextToken());
        dto.setHttpMethod(tokens.nextToken());
        dto.setDuration(Long.parseLong(tokens.nextToken()));

        return dto;

    }

}

Next, we change our Camel configuration, registering the converter:

package com.alexandreesl.handson.configuration;

import com.alexandreesl.handson.converters.StringToAccessLogDTOConverter;
import org.apache.camel.CamelContext;
import org.apache.camel.spring.boot.CamelContextConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CamelConfiguration {


    @Bean
    public CamelContextConfiguration camelContextConfiguration() {

        return new CamelContextConfiguration() {

            @Override
            public void beforeApplicationStart(CamelContext camelContext) {

                camelContext.getTypeConverterRegistry().addTypeConverters(new StringToAccessLogDTOConverter());

            }

            @Override
            public void afterApplicationStart(CamelContext camelContext) {

            }

        };

    }

}

Our converter reads a String and converts it to a DTO with the following attributes:

package com.alexandreesl.handson.dto;

/**
 * Created by alexandrelourenco on 30/06/17.
 */
public class AccessLogDTO {

    private String ip;

    private String url;

    private String httpMethod;

    private long duration;

    public String getIp() {
        return ip;
    }

    public void setIp(String ip) {
        this.ip = ip;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getHttpMethod() {
        return httpMethod;
    }

    public void setHttpMethod(String httpMethod) {
        this.httpMethod = httpMethod;
    }

    public long getDuration() {
        return duration;
    }

    public void setDuration(long duration) {
        this.duration = duration;
    }

    @Override
    public String toString() {

        StringBuffer buffer = new StringBuffer();
        buffer.append("[");
        buffer.append(ip);
        buffer.append(",");
        buffer.append(url);
        buffer.append(",");
        buffer.append(httpMethod);
        buffer.append(",");
        buffer.append(duration);
        buffer.append("]");

        return buffer.toString();

    }
}

Finally, we have our route, defined in our RouteBuilder:

package com.alexandreesl.handson.routes;

import com.alexandreesl.handson.dto.AccessLogDTO;
import org.apache.camel.LoggingLevel;
import org.apache.camel.spring.SpringRouteBuilder;
import org.springframework.context.annotation.Configuration;

/**
 * Created by alexandrelourenco on 30/06/17.
 */

@Configuration
public class MyFirstCamelRoute extends SpringRouteBuilder {


    @Override
    public void configure() throws Exception {

        from("file:/Users/alexandrelourenco/Documents/apachecamelhandson?delay=1000&charset=utf-8&delete=true")
                .setHeader("CamelAwsS3Key", header("CamelFileName"))
                .to("aws-s3:arn:aws:s3:::apache-camel-handson?amazonS3Client=#s3Client")
                .convertBodyTo(String.class)
                .split().tokenize("\n")
                    .convertBodyTo(AccessLogDTO.class)
                    .log(LoggingLevel.INFO, "${body}")
                    .to("aws-sqs://MyInputQueue?amazonSQSClient=#sqsClient");

    }
}

In the route above, we define a file endpoint that polls for files in a folder every second, removing each file if its processing completes successfully. Then we send the file to Amazon S3, which acts as backup storage.

Next, we split the file using a splitter, which generates a string for each line of the file. For each line, we convert it to a DTO, log the data and finally send it to an SQS queue.

Now that we have our code done, let’s run it!

Running

First, we start our Camel route. To do this, we simply run the main Spring Boot class, as we would do with any common Java program.

After firing up Spring Boot, we will see on our console the output confirming that the route was successfully started:

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.5.4.RELEASE)

2017-07-01 12:52:02.224 INFO 3042 --- [ main] c.a.handson.ApacheCamelHandsonApp : Starting ApacheCamelHandsonApp on Alexandres-MacBook-Pro.local with PID 3042 (/Users/alexandrelourenco/Applications/git/apache-camel-handson/build/classes/main started by alexandrelourenco in /Users/alexandrelourenco/Applications/git/apache-camel-handson)
2017-07-01 12:52:02.228 INFO 3042 --- [ main] c.a.handson.ApacheCamelHandsonApp : No active profile set, falling back to default profiles: default
2017-07-01 12:52:02.415 INFO 3042 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@475e586c: startup date [Sat Jul 01 12:52:02 BRT 2017]; root of context hierarchy
2017-07-01 12:52:03.325 INFO 3042 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.apache.camel.spring.boot.CamelAutoConfiguration' of type [org.apache.camel.spring.boot.CamelAutoConfiguration$$EnhancerBySpringCGLIB$$72a2a9b] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2017-07-01 12:52:09.520 INFO 3042 --- [ main] o.a.c.i.converter.DefaultTypeConverter : Loaded 192 type converters
2017-07-01 12:52:09.612 INFO 3042 --- [ main] roperties$SimpleAuthenticationProperties :

Using default password for shell access: b738eab1-6577-4f9b-9a98-2f12eae59828




2017-07-01 12:52:15.463 WARN 3042 --- [ main] tarterDeprecatedWarningAutoConfiguration : spring-boot-starter-remote-shell is deprecated as of Spring Boot 1.5 and will be removed in Spring Boot 2.0
2017-07-01 12:52:15.511 INFO 3042 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2017-07-01 12:52:15.519 INFO 3042 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 0
2017-07-01 12:52:15.615 INFO 3042 --- [ main] o.a.camel.spring.boot.RoutesCollector : Loading additional Camel XML routes from: classpath:camel/*.xml
2017-07-01 12:52:15.615 INFO 3042 --- [ main] o.a.camel.spring.boot.RoutesCollector : Loading additional Camel XML rests from: classpath:camel-rest/*.xml
2017-07-01 12:52:15.616 INFO 3042 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.18.3 (CamelContext: camel-1) is starting
2017-07-01 12:52:15.618 INFO 3042 --- [ main] o.a.c.m.ManagedManagementStrategy : JMX is enabled
2017-07-01 12:52:25.695 INFO 3042 --- [ main] o.a.c.i.DefaultRuntimeEndpointRegistry : Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
2017-07-01 12:52:25.810 INFO 3042 --- [ main] o.a.camel.spring.SpringCamelContext : StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2017-07-01 12:52:27.853 INFO 3042 --- [ main] o.a.camel.spring.SpringCamelContext : Route: route1 started and consuming from: file:///Users/alexandrelourenco/Documents/apachecamelhandson?charset=utf-8&delay=1000&delete=true
2017-07-01 12:52:27.854 INFO 3042 --- [ main] o.a.camel.spring.SpringCamelContext : Total 1 routes, of which 1 are started.
2017-07-01 12:52:27.854 INFO 3042 --- [ main] o.a.camel.spring.SpringCamelContext : Apache Camel 2.18.3 (CamelContext: camel-1) started in 12.238 seconds
2017-07-01 12:52:27.858 INFO 3042 --- [ main] c.a.handson.ApacheCamelHandsonApp : Started ApacheCamelHandsonApp in 36.001 seconds (JVM running for 36.523)

PS: Don’t forget to replace the access key and secret with your own!

Now, to test it, we place a file in the polling folder. For testing, we create a file with content like the following:

10.12.64.3 /api/v1/test1 POST 123
10.12.67.3 /api/v1/test2 PATCH 125
10.15.64.3 /api/v1/test3 GET 166
10.120.64.23 /api/v1/test1 POST 100

We put a file with that content in the folder and after 1 second, the file is gone! Where did it go?

If we check the Amazon S3 bucket interface, we will see that the file was created on the storage:

 

[Image: the S3 bucket with the uploaded file]

And if we check the Amazon SQS interface, we will see 4 messages on the queue, proving that our integration is a success:

[Image: the SQS console showing 4 messages on the queue]

If we check the messages, we will see that Camel correctly parsed the information from the file, as we can see in the example below:

[10.12.64.3,/api/v1/test1,POST,123]

Implementing Error Handling

On Camel, we can implement logic designed for handling errors. This is also done by defining routes, whose inputs are the exceptions thrown by the routes.

In our lab, let’s implement error handling. First, we add an option on the file endpoint that makes the file move to a .error folder when an error occurs; then we send an email to ourselves to alert about the failure. We can do this by changing the route as follows:

package com.alexandreesl.handson.routes;

import com.alexandreesl.handson.dto.AccessLogDTO;
import org.apache.camel.LoggingLevel;
import org.apache.camel.spring.SpringRouteBuilder;
import org.springframework.context.annotation.Configuration;

/**
 * Created by alexandrelourenco on 30/06/17.
 */

@Configuration
public class MyFirstCamelRoute extends SpringRouteBuilder {


    @Override
    public void configure() throws Exception {

        onException(Exception.class)
                .handled(false)
                .log(LoggingLevel.ERROR, "An Error processing the file!")
                .to("smtps://smtp.gmail.com:465?password=xxxxxxxxxxxxxxxx&username=alexandreesl@gmail.com&subject=A error has occurred!");

        from("file:/Users/alexandrelourenco/Documents/apachecamelhandson?delay=1000&charset=utf-8&delete=true&moveFailed=.error")
                .setHeader("CamelAwsS3Key", header("CamelFileName"))
                .to("aws-s3:arn:aws:s3:::apache-camel-handson?amazonS3Client=#s3Client")
                .convertBodyTo(String.class)
                .split().tokenize("\n")
                    .convertBodyTo(AccessLogDTO.class)
                    .log(LoggingLevel.INFO, "${body}")
                    .to("aws-sqs://MyInputQueue?amazonSQSClient=#sqsClient");

    }
}

Then, we restart the route and feed it a file like the following, which will cause a parse exception:

10.12.64.3 /api/v1/test1 POST 123
10.12.67.3 /api/v1/test2 PATCH 125
10.15.64.3 /api/v1/test3 GET 166
10.120.64.23 /api/v1/test1 POST 10a

After the processing, we can check the console and watch how the error was handled:

2017-07-01 14:18:48.695  INFO 3230 --- [           main] o.a.camel.spring.SpringCamelContext      : Apache Camel 2.18.3 (CamelContext: camel-1) started in 11.899 seconds
2017-07-01 14:18:48.699  INFO 3230 --- [           main] c.a.handson.ApacheCamelHandsonApp        : Started ApacheCamelHandsonApp in 35.612 seconds (JVM running for 36.052)
2017-07-01 14:18:52.737  WARN 3230 --- [checamelhandson] c.amazonaws.services.s3.AmazonS3Client   : No content length specified for stream data.  Stream contents will be buffered in memory and could result in out of memory errors.
2017-07-01 14:18:53.105  INFO 3230 --- [checamelhandson] route1                                   : [10.12.64.3,/api/v1/test1,POST,123]
2017-07-01 14:18:53.294  INFO 3230 --- [checamelhandson] route1                                   : [10.12.67.3,/api/v1/test2,PATCH,125]
2017-07-01 14:18:53.504  INFO 3230 --- [checamelhandson] route1                                   : [10.15.64.3,/api/v1/test3,GET,166]
2017-07-01 14:18:53.682 ERROR 3230 --- [checamelhandson] route1                                   : An Error processing the file!
2017-07-01 14:19:02.058 ERROR 3230 --- [checamelhandson] o.a.camel.processor.DefaultErrorHandler  : Failed delivery for (MessageId: ID-Alexandres-MacBook-Pro-local-52251-1498929510223-0-9 on ExchangeId: ID-Alexandres-MacBook-Pro-local-52251-1498929510223-0-10). Exhausted after delivery attempt: 1 caught: org.apache.camel.InvalidPayloadException: No body available of type: com.alexandreesl.handson.dto.AccessLogDTO but has value: 10.120.64.23 /api/v1/test1 POST 10a of type: java.lang.String on: Message[ID-Alexandres-MacBook-Pro-local-52251-1498929510223-0-9]. Caused by: Error during type conversion from type: java.lang.String to the required type: com.alexandreesl.handson.dto.AccessLogDTO with value 10.120.64.23 /api/v1/test1 POST 10a due java.lang.NumberFormatException: For input string: "10a". Exchange[ID-Alexandres-MacBook-Pro-local-52251-1498929510223-0-10]. Caused by: [org.apache.camel.TypeConversionException - Error during type conversion from type: java.lang.String to the required type: com.alexandreesl.handson.dto.AccessLogDTO with value 10.120.64.23 /api/v1/test1 POST 10a due java.lang.NumberFormatException: For input string: "10a"]. Processed by failure processor: FatalFallbackErrorHandler[Pipeline[[Channel[Log(route1)[An Error processing the file!]], Channel[sendTo(smtps://smtp.gmail.com:465?password=xxxxxx&subject=A+error+has+occurred%21&username=alexandreesl%40gmail.com)]]]]

Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId              ProcessorId          Processor                                                                        Elapsed (ms)
[route1            ] [route1            ] [file:///Users/alexandrelourenco/Documents/apachecamelhandson?charset=utf-8&del] [      9323]
[route1            ] [convertBodyTo2    ] [convertBodyTo[com.alexandreesl.handson.dto.AccessLogDTO]                      ] [      8370]
[route1            ] [log1              ] [log                                                                           ] [         1]
[route1            ] [to1               ] [smtps://smtp.gmail.com:xxxxxx@gmail.com&subject=A error ha                    ] [      8366]

Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
org.apache.camel.InvalidPayloadException: No body available of type: com.alexandreesl.handson.dto.AccessLogDTO but has value: 10.120.64.23 /api/v1/test1 POST 10a of type: java.lang.String on: Message[ID-Alexandres-MacBook-Pro-local-52251-1498929510223-0-9]. Caused by: Error during type conversion from type: java.lang.String to the required type: com.alexandreesl.handson.dto.AccessLogDTO with value 10.120.64.23 /api/v1/test1 POST 10a due java.lang.NumberFormatException: For input string: "10a". Exchange[ID-Alexandres-MacBook-Pro-local-52251-1498929510223-0-10]. Caused by: [org.apache.camel.TypeConversionException - Error during type conversion from type: java.lang.String to the required type: com.alexandreesl.handson.dto.AccessLogDTO with value 10.120.64.23 /api/v1/test1 POST 10a due java.lang.NumberFormatException: For input string: "10a"]
	at org.apache.camel.impl.MessageSupport.getMandatoryBody(MessageSupport.java:107) ~[camel-core-2.18.3.jar:2.18.3]
	at org.apache.camel.processor.ConvertBodyProcessor.process(ConvertBodyProcessor.java:91) ~[camel-core-2.18.3.jar:2.18.3]
	at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77) [camel-core-2.18.3.jar:2.18.3]
	at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:542) [camel-core-2.18.3.jar:2.18.3]
	at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197) [camel-core-2.18.3.jar:2.18.3]

If we look at the folder, we will see that a .error folder was created and the file was moved into it:

[Image: the .error folder created with the failed file]

And if we check the mailbox, we will see that we received the failure alert:

Screen Shot 2017-07-01 at 14.27.40

Conclusion

And so we conclude our tour through Apache Camel. With an easy-to-use architecture and dozens of components, it is a highly pluggable and robust option for integration development. Thank you for following me on this post, until next time.

Mockito & DBUnit: Implementing a focused and independent mocking structure for your automated tests in Java

Standard

Hi, my dear readers! Welcome to my blog. In this post, we will do a hands-on with Mockito and DBUnit, two libraries from Java's open source ecosystem that can help us improve the focus and independence of our JUnit tests. But why is mocking so important in our unit tests?

Focusing the tests

Let's imagine a Java back-end application with a tier-like architecture. In this application, we could have 2 tiers:

  • The service tier, which holds the business rules and acts as an interface for the front-end;
  • The entity tier, which holds the logic responsible for making calls to a database, using technologies like JDBC or JPA;

Of course, in an architecture of this kind, we will have the following dependency between our tiers:

Service >>> Entity

In this kind of architecture, the most common way of building our automated tests is to create JUnit test classes that test each tier independently, so each test run reflects only the correctness of the tier we want to test. However, if we simply create the classes without any mocking, we will get problems like the following:

  • In the JUnit tests of our service tier, for example, if we have a problem in the entity tier, our tests will fail as well, because the error from the entity tier will reverberate across the tiers;
  • If different teams are working on the same system, one responsible for building the service tier and the other for the entity tier, one team will depend on the other before the tests can be run;

To resolve such issues, we can mock the entity tier in the service tier's unit test classes, so our tests gain independence and stay focused on the service tier, where they belong.

Independence

One point that is especially important for the independence of our JUnit test classes is the entity tier. Since in our example this tier is focused on connecting to a database and running SQL commands, it breaks our independence goal: we need a database just to run our tests. Not only that, if a test breaks any structure that is used by the subsequent tests, all of them will fail as well. It is at this point that our other library, DBUnit, comes in.

With DBUnit, we can use embedded databases, such as HSQLDB, to make our database exclusive to our test runs.
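Just to make the idea concrete before the lab, the sketch below (not part of the lab's code; class and database names are illustrative) opens an in-memory HSQLDB database using nothing but the hsqldb jar on the classpath. The database lives and dies with the JVM, which is exactly what we want for tests:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class EmbeddedDbSketch {

public static void main(String[] args) throws Exception {

// a "mem:" URL creates a database that exists only inside this JVM
Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:sketch", "sa", "");

conn.createStatement().execute("create table t (id integer)");
conn.createStatement().execute("insert into t values (1)");

ResultSet rs = conn.createStatement().executeQuery("select count(*) from t");
rs.next();
System.out.println(rs.getInt(1)); // prints 1

conn.close();

}

}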

So, without further delay, let’s begin our hands-on!

Hands-on

For this lab, we will create a basic CRUD for a Client entity. The structure will follow the simple example we talked about previously, with the DAO (entity) and service tiers. We will use DBUnit and JUnit to test the DAO tier, and Mockito with JUnit to test the service tier. First, let's create a Maven project, without any archetype, and include the following dependencies in the pom.xml:

.

.

.

<dependencies>

<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
</dependency>

<dependency>
<groupId>org.dbunit</groupId>
<artifactId>dbunit</artifactId>
<version>2.5.0</version>
</dependency>

<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<version>1.10.19</version>
</dependency>

<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>4.3.8.Final</version>
</dependency>

<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<version>2.3.2</version>
</dependency>

<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>4.1.5.RELEASE</version>
</dependency>

<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>4.1.5.RELEASE</version>
</dependency>

<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>4.1.5.RELEASE</version>
</dependency>

<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-tx</artifactId>
<version>4.1.5.RELEASE</version>
</dependency>

<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-orm</artifactId>
<version>4.1.5.RELEASE</version>
</dependency>

</dependencies>

.

.

.

In the previous snippet, we included not only the Mockito, DBUnit and JUnit libraries, but also Hibernate to implement the persistence layer and Spring 4 to use the IoC container and the transaction management. We also included the Spring Test library, which has some features that we will use later in this lab. Finally, to simplify the setup and remove the need to install a database to run the code, we will use HSQLDB as our database.

Our lab will have the following structure:

  • One class will represent the application itself, as a standalone class, where we will consume the tiers like a real application would;
  • Another 2 classes, each one with JUnit tests, will test each tier independently;

First, we define a persistence unit, where we set the name of the unit and the properties that make Hibernate create the table for us and populate it with some initial rows. The code of persistence.xml can be seen below:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
version="1.0">
<persistence-unit name="persistence" transaction-type="RESOURCE_LOCAL">
<class>com.alexandreesl.handson.model.Client</class>

<properties>
<property name="hibernate.hbm2ddl.auto" value="create" />
<property name="hibernate.hbm2ddl.import_files" value="sql/import.sql" />
</properties>

</persistence-unit>
</persistence>

And the initial data to populate the table can be seen below:

insert into Client(id,name,sex, phone) values (1,'Alexandre Eleuterio Santos Lourenco','M','22323456');
insert into Client(id,name,sex, phone) values (2,'Lucebiane Santos Lourenco','F','22323876');
insert into Client(id,name,sex, phone) values (3,'Maria Odete dos Santos Lourenco','F','22309456');
insert into Client(id,name,sex, phone) values (4,'Eleuterio da Silva Lourenco','M','22323956');
insert into Client(id,name,sex, phone) values (5,'Ana Carolina Fernandes do Sim','F','22123456');

In order not to make the post burdensome, we will not discuss the project structure during the lab, just show the final structure at the end. The code can be found on a GitHub repository, linked at the end of the post.

With the persistence unit defined, we can start coding! First, we create the entity class:

package com.alexandreesl.handson.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Table(name = "Client")
@Entity
public class Client {

@Id
private long id;

@Column(name = "name", nullable = false, length = 50)
private String name;

@Column(name = "sex", nullable = false)
private String sex;

@Column(name = "phone", nullable = false)
private long phone;

public long getId() {
return id;
}

public void setId(long id) {
this.id = id;
}

public String getName() {
return name;
}

public void setName(String name) {
this.name = name;
}

public String getSex() {
return sex;
}

public void setSex(String sex) {
this.sex = sex;
}

public long getPhone() {
return phone;
}

public void setPhone(long phone) {
this.phone = phone;
}

}

In order to create the persistence-related beans that enable Hibernate and the transaction manager, alongside the rest of the beans necessary for the application, we use a Java-based Spring configuration class. The code of the class can be seen below:

package com.alexandreesl.handson.core;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.Database;
import org.springframework.orm.jpa.vendor.HibernateJpaDialect;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
@ComponentScan({ "com.alexandreesl.handson.dao",
"com.alexandreesl.handson.service" })
public class AppConfiguration {

@Bean
public DriverManagerDataSource dataSource() {

DriverManagerDataSource dataSource = new DriverManagerDataSource();

dataSource.setDriverClassName("org.hsqldb.jdbcDriver");
dataSource.setUrl("jdbc:hsqldb:mem://standalone");
dataSource.setUsername("sa");
dataSource.setPassword("");

return dataSource;
}

@Bean
public JpaTransactionManager transactionManager() {

JpaTransactionManager transactionManager = new JpaTransactionManager();

transactionManager.setEntityManagerFactory(entityManagerFactory()
.getNativeEntityManagerFactory());
transactionManager.setDataSource(dataSource());
transactionManager.setJpaDialect(jpaDialect());

return transactionManager;
}

@Bean
public HibernateJpaDialect jpaDialect() {
return new HibernateJpaDialect();
}

@Bean
public HibernateJpaVendorAdapter jpaVendorAdapter() {
HibernateJpaVendorAdapter jpaVendor = new HibernateJpaVendorAdapter();

jpaVendor.setDatabase(Database.HSQL);
jpaVendor.setDatabasePlatform("org.hibernate.dialect.HSQLDialect");

return jpaVendor;

}

@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {

LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();

entityManagerFactory
.setPersistenceXmlLocation("classpath:META-INF/persistence.xml");
entityManagerFactory.setPersistenceUnitName("persistence");
entityManagerFactory.setDataSource(dataSource());
entityManagerFactory.setJpaVendorAdapter(jpaVendorAdapter());
entityManagerFactory.setJpaDialect(jpaDialect());

return entityManagerFactory;

}

}

And finally, we create the classes that represent the tiers themselves. This is the DAO class:

package com.alexandreesl.handson.dao;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import com.alexandreesl.handson.model.Client;

@Component
public class ClientDAO {

@PersistenceContext
private EntityManager entityManager;

@Transactional(readOnly = true)
public Client find(long id) {

return entityManager.find(Client.class, id);

}

@Transactional
public void create(Client client) {

entityManager.persist(client);

}

@Transactional
public void update(Client client) {

entityManager.merge(client);

}

@Transactional
public void delete(Client client) {

entityManager.remove(client);

}

}

And this is the service class:

 package com.alexandreesl.handson.service;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import com.alexandreesl.handson.dao.ClientDAO;
import com.alexandreesl.handson.model.Client;

@Component
public class ClientService {

@Autowired
private ClientDAO clientDAO;

public ClientDAO getClientDAO() {
return clientDAO;
}

public void setClientDAO(ClientDAO clientDAO) {
this.clientDAO = clientDAO;
}

public Client find(long id) {

return clientDAO.find(id);

}

public void create(Client client) {

clientDAO.create(client);

}

public void update(Client client) {

clientDAO.update(client);

}

public void delete(Client client) {

clientDAO.delete(client);

}

}

The reader may notice that we created a getter/setter for the DAO in the service class. This is not necessary for the Spring injection, but we did it this way to make it easier to swap the real DAO for a Mockito mock in the test class. Finally, we code the class we talked about previously, the one that consumes the tiers:

package com.alexandreesl.handson.core;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.alexandreesl.handson.model.Client;
import com.alexandreesl.handson.service.ClientService;

public class App {

public static void main(String[] args) {

ApplicationContext context = new AnnotationConfigApplicationContext(
AppConfiguration.class);

ClientService service = (ClientService) context
.getBean(ClientService.class);

System.out.println(service.find(1).getName());

System.out.println(service.find(3).getName());

System.out.println(service.find(5).getName());

Client client = new Client();

client.setId(6);
client.setName("Celina do Sim");
client.setPhone(44657688);
client.setSex("F");

service.create(client);

System.out.println(service.find(6).getName());

System.exit(0);

}

}

If we run the class, we can see that the console prints all the clients we searched for and that Hibernate is initialized properly, proving that our implementation works:

Mar 28, 2015 1:09:22 PM org.springframework.context.annotation.AnnotationConfigApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@6433a2: startup date [Sat Mar 28 13:09:22 BRT 2015]; root of context hierarchy
Mar 28, 2015 1:09:22 PM org.springframework.jdbc.datasource.DriverManagerDataSource setDriverClassName
INFO: Loaded JDBC driver: org.hsqldb.jdbcDriver
Mar 28, 2015 1:09:22 PM org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean createNativeEntityManagerFactory
INFO: Building JPA container EntityManagerFactory for persistence unit ‘persistence’
Mar 28, 2015 1:09:22 PM org.hibernate.jpa.internal.util.LogHelper logPersistenceUnitInformation
INFO: HHH000204: Processing PersistenceUnitInfo [
name: persistence
…]
Mar 28, 2015 1:09:22 PM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate Core {4.3.8.Final}
Mar 28, 2015 1:09:22 PM org.hibernate.cfg.Environment <clinit>
INFO: HHH000206: hibernate.properties not found
Mar 28, 2015 1:09:22 PM org.hibernate.cfg.Environment buildBytecodeProvider
INFO: HHH000021: Bytecode provider name : javassist
Mar 28, 2015 1:09:22 PM org.hibernate.annotations.common.reflection.java.JavaReflectionManager <clinit>
INFO: HCANN000001: Hibernate Commons Annotations {4.0.5.Final}
Mar 28, 2015 1:09:23 PM org.hibernate.dialect.Dialect <init>
INFO: HHH000400: Using dialect: org.hibernate.dialect.HSQLDialect
Mar 28, 2015 1:09:23 PM org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory <init>
INFO: HHH000397: Using ASTQueryTranslatorFactory
Mar 28, 2015 1:09:23 PM org.hibernate.tool.hbm2ddl.SchemaExport execute
INFO: HHH000227: Running hbm2ddl schema export
Mar 28, 2015 1:09:23 PM org.hibernate.tool.hbm2ddl.SchemaExport execute
INFO: HHH000230: Schema export complete
Alexandre Eleuterio Santos Lourenco
Maria Odete dos Santos Lourenco
Ana Carolina Fernandes do Sim
Celina do Sim

Now, let's move on to the tests themselves. For the DBUnit tests, we create a base class, which provides the base DB operations from which all of our JUnit test classes will benefit. In the @PostConstruct method – which is fired after all the injections of the Spring context are made, the reason why we couldn't use the @BeforeClass annotation, since we need Spring to instantiate and inject the EntityManagerFactory first – we use DBUnit's DatabaseConnection class to open a connection to our database and populate the table using a dataset, passing an XML structure that represents the data used in the tests.

This population of the table is done by the DatabaseOperation class, which we use with the CLEAN_INSERT operation, which truncates the table first and then inserts the data from the dataset. Finally, we use one of JUnit's annotations, @After, which marks a method to run after every test case. In our scenario, we use it to call the clear() method on the EntityManager, which empties the persistence context (first-level cache) and forces Hibernate to query the database again at every test case, eliminating possible problems between test cases caused by cached data that differs from what is in the DB.

The code for the base class is the following:

package com.alexandreesl.handson.dao.test;

import java.io.InputStream;
import java.sql.SQLException;

import javax.annotation.PostConstruct;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

import org.dbunit.DatabaseUnitException;
import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.ext.hsqldb.HsqldbDataTypeFactory;
import org.dbunit.operation.DatabaseOperation;
import org.hibernate.HibernateException;
import org.hibernate.internal.SessionImpl;
import org.junit.After;

public class BaseDBUnitSetup {

private static IDatabaseConnection connection;
private static IDataSet dataset;

@PersistenceUnit
public EntityManagerFactory entityManagerFactory;

private EntityManager entityManager;

@PostConstruct
public void init() throws HibernateException, DatabaseUnitException,
SQLException {

entityManager = entityManagerFactory.createEntityManager();

connection = new DatabaseConnection(
((SessionImpl) (entityManager.getDelegate())).connection());
connection.getConfig().setProperty(
DatabaseConfig.PROPERTY_DATATYPE_FACTORY,
new HsqldbDataTypeFactory());

FlatXmlDataSetBuilder flatXmlDataSetBuilder = new FlatXmlDataSetBuilder();
InputStream dataSet = Thread.currentThread().getContextClassLoader()
.getResourceAsStream(“test-data.xml”);
dataset = flatXmlDataSetBuilder.build(dataSet);

DatabaseOperation.CLEAN_INSERT.execute(connection, dataset);

}

@After
public void afterTests() {
entityManager.clear();
}

}

The xml structure used on the test cases is the following:

<?xml version="1.0" encoding="UTF-8"?>
<dataset>
<Client id="1" name="Alexandre Eleuterio Santos Lourenco" sex="M" phone="22323456" />
<Client id="2" name="Lucebiane Santos Lourenco" sex="F" phone="22323876" />
</dataset>

And the code of our test class of the DAO tier is the following:

package com.alexandreesl.handson.dao.test;

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.transaction.annotation.Transactional;

import com.alexandreesl.handson.core.test.AppTestConfiguration;
import com.alexandreesl.handson.dao.ClientDAO;
import com.alexandreesl.handson.model.Client;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppTestConfiguration.class)
@TransactionConfiguration(defaultRollback = true)
public class ClientDAOTest extends BaseDBUnitSetup {

@Autowired
private ClientDAO clientDAO;

@Test
public void testFind() {

Client client = clientDAO.find(1);

assertNotNull(client);

client = clientDAO.find(2);

assertNotNull(client);

client = clientDAO.find(3);

assertNull(client);

client = clientDAO.find(4);

assertNull(client);

client = clientDAO.find(5);

assertNull(client);

}

@Test
@Transactional
public void testInsert() {

Client client = new Client();

client.setId(3);
client.setName(“Celina do Sim”);
client.setPhone(44657688);
client.setSex(“F”);

clientDAO.create(client);

}

@Test
@Transactional
public void testUpdate() {

Client client = clientDAO.find(1);

client.setPhone(12345678);

clientDAO.update(client);

}

@Test
@Transactional
public void testRemove() {

Client client = clientDAO.find(1);

clientDAO.delete(client);

}

}

The code is very self-explanatory, so we will just focus on explaining the annotations at the class level. The @RunWith(SpringJUnit4ClassRunner.class) annotation changes the JUnit base class that runs our test cases, using one made by Spring that enables support for the IoC container and Spring's annotations. The @TransactionConfiguration(defaultRollback = true) annotation is from Spring's test library and changes the behavior of the @Transactional annotation, making the transactions roll back after execution instead of committing. That ensures that our test cases won't change the structure of the DB, so a test case won't break the execution of the tests that follow it.

The reader may notice that we changed the configuration class to another one, exclusive to the test cases. It holds essentially the same beans we created in the original configuration class, only changing the database bean to point to a different DB than the previous one, showing that we can change the database of our tests without breaking the code. In a real-world scenario, the configuration class of the application would point to a relational database like Oracle, DB2, etc., and the test cases would use an embedded database such as HSQLDB, which we are using in this case:

package com.alexandreesl.handson.core.test;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.Database;
import org.springframework.orm.jpa.vendor.HibernateJpaDialect;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
@ComponentScan({ "com.alexandreesl.handson.dao",
"com.alexandreesl.handson.service" })
public class AppTestConfiguration {

@Bean
public DriverManagerDataSource dataSource() {

DriverManagerDataSource dataSource = new DriverManagerDataSource();

dataSource.setDriverClassName("org.hsqldb.jdbcDriver");
dataSource.setUrl("jdbc:hsqldb:mem://standalone-test");
dataSource.setUsername("sa");
dataSource.setPassword("");

return dataSource;
}

@Bean
public JpaTransactionManager transactionManager() {

JpaTransactionManager transactionManager = new JpaTransactionManager();

transactionManager.setEntityManagerFactory(entityManagerFactory()
.getNativeEntityManagerFactory());
transactionManager.setDataSource(dataSource());
transactionManager.setJpaDialect(jpaDialect());

return transactionManager;
}

@Bean
public HibernateJpaDialect jpaDialect() {
return new HibernateJpaDialect();
}

@Bean
public HibernateJpaVendorAdapter jpaVendorAdapter() {
HibernateJpaVendorAdapter jpaVendor = new HibernateJpaVendorAdapter();

jpaVendor.setDatabase(Database.HSQL);
jpaVendor.setDatabasePlatform("org.hibernate.dialect.HSQLDialect");

return jpaVendor;

}

@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {

LocalContainerEntityManagerFactoryBean entityManagerFactory = new LocalContainerEntityManagerFactoryBean();

entityManagerFactory
.setPersistenceXmlLocation("classpath:META-INF/persistence.xml");
entityManagerFactory.setPersistenceUnitName("persistence");
entityManagerFactory.setDataSource(dataSource());
entityManagerFactory.setJpaVendorAdapter(jpaVendorAdapter());
entityManagerFactory.setJpaDialect(jpaDialect());

return entityManagerFactory;

}

}

If we run the test class, we can see that it runs the test cases successfully, showing that our code works. In the console, we can see that transactions were created and rolled back, respecting our configuration:

.

.

.

Mar 28, 2015 2:29:55 PM org.springframework.test.context.transaction.TransactionContext startTransaction
INFO: Began transaction (1) for test context [DefaultTestContext@644abb8f testClass = ClientDAOTest, testInstance = com.alexandreesl.handson.dao.test.ClientDAOTest@1a411233, testMethod = testInsert@ClientDAOTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration@70325d20 testClass = ClientDAOTest, locations = ‘{}’, classes = ‘{class com.alexandreesl.handson.core.test.AppTestConfiguration}’, contextInitializerClasses = ‘[]’, activeProfiles = ‘{}’, propertySourceLocations = ‘{}’, propertySourceProperties = ‘{}’, contextLoader = ‘org.springframework.test.context.support.DelegatingSmartContextLoader’, parent = [null]]]; transaction manager [org.springframework.orm.jpa.JpaTransactionManager@7c2327fa]; rollback [true]
Mar 28, 2015 2:29:55 PM org.springframework.test.context.transaction.TransactionContext endTransaction
INFO: Rolled back transaction for test context [DefaultTestContext@644abb8f testClass = ClientDAOTest, testInstance = com.alexandreesl.handson.dao.test.ClientDAOTest@1a411233, testMethod = testInsert@ClientDAOTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration@70325d20 testClass = ClientDAOTest, locations = ‘{}’, classes = ‘{class com.alexandreesl.handson.core.test.AppTestConfiguration}’, contextInitializerClasses = ‘[]’, activeProfiles = ‘{}’, propertySourceLocations = ‘{}’, propertySourceProperties = ‘{}’, contextLoader = ‘org.springframework.test.context.support.DelegatingSmartContextLoader’, parent = [null]]].
Mar 28, 2015 2:29:55 PM org.springframework.test.context.transaction.TransactionContext startTransaction
INFO: Began transaction (1) for test context [DefaultTestContext@644abb8f testClass = ClientDAOTest, testInstance = com.alexandreesl.handson.dao.test.ClientDAOTest@2adddc06, testMethod = testRemove@ClientDAOTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration@70325d20 testClass = ClientDAOTest, locations = ‘{}’, classes = ‘{class com.alexandreesl.handson.core.test.AppTestConfiguration}’, contextInitializerClasses = ‘[]’, activeProfiles = ‘{}’, propertySourceLocations = ‘{}’, propertySourceProperties = ‘{}’, contextLoader = ‘org.springframework.test.context.support.DelegatingSmartContextLoader’, parent = [null]]]; transaction manager [org.springframework.orm.jpa.JpaTransactionManager@7c2327fa]; rollback [true]
Mar 28, 2015 2:29:55 PM org.springframework.test.context.transaction.TransactionContext endTransaction
INFO: Rolled back transaction for test context [DefaultTestContext@644abb8f testClass = ClientDAOTest, testInstance = com.alexandreesl.handson.dao.test.ClientDAOTest@2adddc06, testMethod = testRemove@ClientDAOTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration@70325d20 testClass = ClientDAOTest, locations = ‘{}’, classes = ‘{class com.alexandreesl.handson.core.test.AppTestConfiguration}’, contextInitializerClasses = ‘[]’, activeProfiles = ‘{}’, propertySourceLocations = ‘{}’, propertySourceProperties = ‘{}’, contextLoader = ‘org.springframework.test.context.support.DelegatingSmartContextLoader’, parent = [null]]].
Mar 28, 2015 2:29:55 PM org.springframework.test.context.transaction.TransactionContext startTransaction
INFO: Began transaction (1) for test context [DefaultTestContext@644abb8f testClass = ClientDAOTest, testInstance = com.alexandreesl.handson.dao.test.ClientDAOTest@4905c46b, testMethod = testUpdate@ClientDAOTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration@70325d20 testClass = ClientDAOTest, locations = ‘{}’, classes = ‘{class com.alexandreesl.handson.core.test.AppTestConfiguration}’, contextInitializerClasses = ‘[]’, activeProfiles = ‘{}’, propertySourceLocations = ‘{}’, propertySourceProperties = ‘{}’, contextLoader = ‘org.springframework.test.context.support.DelegatingSmartContextLoader’, parent = [null]]]; transaction manager [org.springframework.orm.jpa.JpaTransactionManager@7c2327fa]; rollback [true]
Mar 28, 2015 2:29:55 PM org.springframework.test.context.transaction.TransactionContext endTransaction
INFO: Rolled back transaction for test context [DefaultTestContext@644abb8f testClass = ClientDAOTest, testInstance = com.alexandreesl.handson.dao.test.ClientDAOTest@4905c46b, testMethod = testUpdate@ClientDAOTest, testException = [null], mergedContextConfiguration = [MergedContextConfiguration@70325d20 testClass = ClientDAOTest, locations = ‘{}’, classes = ‘{class com.alexandreesl.handson.core.test.AppTestConfiguration}’, contextInitializerClasses = ‘[]’, activeProfiles = ‘{}’, propertySourceLocations = ‘{}’, propertySourceProperties = ‘{}’, contextLoader = ‘org.springframework.test.context.support.DelegatingSmartContextLoader’, parent = [null]]].

Now let’s move on to the Service tests, with the help of Mockito.

The class to test the service tier is very simple, as we can see below:

package com.alexandreesl.handson.service.test;

import static org.junit.Assert.assertEquals;

import org.junit.BeforeClass;
import org.junit.Test;
import org.mockito.Mockito;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

import com.alexandreesl.handson.dao.ClientDAO;
import com.alexandreesl.handson.model.Client;
import com.alexandreesl.handson.service.ClientService;

public class ClientServiceTest {

private static ClientDAO clientDAO;

private static ClientService clientService;

@BeforeClass
public static void beforeClass() {

clientService = new ClientService();

clientDAO = Mockito.mock(ClientDAO.class);

clientService.setClientDAO(clientDAO);

Client client = new Client();
client.setId(0);
client.setName("Mocked client!");
client.setPhone(11111111);
client.setSex("M");

Mockito.when(clientDAO.find(Mockito.anyLong())).thenReturn(client);

Mockito.doThrow(new RuntimeException("error on client!"))
.when(clientDAO).delete((Client) Mockito.any());

Mockito.doNothing().when(clientDAO).create((Client) Mockito.any());

Mockito.doAnswer(new Answer<Object>() {
public Object answer(InvocationOnMock invocation) {
Object[] args = invocation.getArguments();

Client client = (Client) args[0];

client.setName("Mocked client has changed!");

return client;
}
}).when(clientDAO).update((Client) Mockito.any());

}

@Test
public void testFind() {

Client client = clientService.find(10);

Mockito.verify(clientDAO).find(10);

assertEquals(client.getName(), "Mocked client!");

}

@Test
public void testInsert() {

Client client = new Client();

client.setId(3);
client.setName("Celina do Sim");
client.setPhone(44657688);
client.setSex("F");

clientService.create(client);

Mockito.verify(clientDAO).create(client);

}

@Test
public void testUpdate() {

Client client = clientService.find(20);

client.setPhone(12345678);

clientService.update(client);

Mockito.verify(clientDAO).update(client);

assertEquals(client.getName(), "Mocked client has changed!");

}

@Test(expected = RuntimeException.class)
public void testRemove() {

Client client = clientService.find(2);

clientService.delete(client);

}

}

In this test case, we didn't need Spring to inject the dependencies, because the only dependency of the service class is the DAO, which we create as a mock with Mockito in the @BeforeClass method, which executes just once, before any test case runs. In Mockito, we have the concepts of mocks and spies. With mocks, we have to define the behavior of every method our tests will be calling, because mocks don't call the original methods, even if no behavior is declared. If we want to mock just some methods and use the original implementation on the others, we use a spy instead.
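To make the difference concrete, the snippet below is a small self-contained sketch – using a List instead of our DAO, just for illustration – of how a mock and a spy behave:

import java.util.ArrayList;
import java.util.List;

import org.mockito.Mockito;

public class MockVsSpySketch {

public static void main(String[] args) {

List<String> mock = Mockito.mock(ArrayList.class);
mock.add("a"); // the call is recorded by Mockito, but no real behavior runs
System.out.println(mock.size()); // prints 0, since size() was not stubbed

List<String> spy = Mockito.spy(new ArrayList<String>());
spy.add("a"); // the real add() runs
System.out.println(spy.size()); // prints 1

Mockito.doReturn(42).when(spy).size(); // stub just this one method
System.out.println(spy.size()); // prints 42, the rest of the List still behaves normally

}

}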

On our tests, we made the following mocks:

  • For the find method, we specify that any call will return a client with the name "Mocked client!";
  • For the delete method, we throw a RuntimeException with the message "error on client!";
  • For the create method, we just receive the argument and do nothing;
  • For the update method, we receive the client passed as argument and change its name to "Mocked client has changed!";

And that is it. We can also see that our test cases use Mockito's verify method, which ensures that our mock was called inside the service tier. This check is very useful to ensure, for example, that a developer didn't remove the call to the DAO tier by accident. If we run the JUnit class, we can see that everything passes, proving that our coding was successful.
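Beyond the plain verify used above, Mockito also offers verification modes that assert how many times a call happened. A short sketch, reusing the names from our test class:

// called exactly once with this exact argument
Mockito.verify(clientDAO, Mockito.times(1)).create(client);

// never called during the test
Mockito.verify(clientDAO, Mockito.never()).delete(client);

// fails if the mock received any interaction that was not verified
Mockito.verifyNoMoreInteractions(clientDAO);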

This is the final structure of the project:

Conclusion

And that concludes our hands-on. Simple to use yet powerful, test libraries like Mockito and DBUnit are useful tools for developing a solid testing tier in our applications, improving and ensuring the quality of our code. Thank you for following me, until next time.

Hands-on Akka: exploring a new model of parallelism in applications

Standard

Welcome, dear reader, to another post from my blog on technology. In this post we will discuss a framework, originally made for the Scala language but also available for Java, which offers a new way of developing parallel applications: Akka.

Traditional model of parallelism: Threads

Traditionally, when we work with parallelism, we use threads, which sometimes need to share resources with each other. In order to ensure the isolation of executions, we start wrapping execution blocks with the synchronized keyword. As the system grows, more and more blocks of this nature are added, occasionally taking us to the condition of deadlock, where processes are in a state of permanent lock, each one attempting to access a resource that is already locked by another process in its execution flow. We can see an example of this situation in the figure below, where three threads "compete" for the use of resources and enter a deadlock state:
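For reference, a deadlock of this kind can be reproduced with just two threads. In the minimal sketch below, each thread grabs one lock and then waits forever for the lock the other one holds:

public class DeadlockSketch {

private static final Object LOCK_A = new Object();
private static final Object LOCK_B = new Object();

public static void main(String[] args) {

new Thread(new Runnable() {
public void run() {
synchronized (LOCK_A) {
sleep(100); // give the other thread time to acquire LOCK_B
synchronized (LOCK_B) {
System.out.println("thread 1 done"); // never printed
}
}
}
}).start();

new Thread(new Runnable() {
public void run() {
synchronized (LOCK_B) {
sleep(100); // give the other thread time to acquire LOCK_A
synchronized (LOCK_A) {
System.out.println("thread 2 done"); // never printed
}
}
}
}).start();

}

private static void sleep(long millis) {
try {
Thread.sleep(millis);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}

}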

With this problem in mind, in 1973 Carl Hewitt, Peter Bishop and Richard Steiger wrote a paper called "A Universal Modular Actor Formalism for Artificial Intelligence". This paper introduced the concept of actors, which we will discuss next.

Parallelism model by actors

In the actor model of parallelism, we have a new development concept. In this model, all processing must be broken into logical units, called actors, each with its due role and its proper place within a flow. A simple way to understand this model is to imagine a real-life process where the "actors" are people. Imagine a flow where a person A receives a message to be sent by letter to a person B. In this scenario, we would have the following flow of actions, in simplified form:

  • Person A receives the letter;
  • Person A delivers the letter to the receptionist of the post office;
  • The receptionist organizes the letters and delivers them to the postman;
  • The postman heads to the residence of person B and delivers the letter;
  • Person B reads the letter;

In this simple example, we see each person being an actor within the stream. An important point to notice is that all actions are asynchronous among the actors: once person A hands the letter to the receptionist, he does not need to wait for the delivery of the letter to finish his participation in the stream.

It is precisely from these concepts that Akka builds its processing: actors run as independent steps of the flow, and their interactions with each other occur asynchronously.

The diagram below illustrates this model:

PS: In the model above, we can see that the actors form a kind of hierarchy, where from a root actor, other actors are invoked. This hierarchy reflects the "location" of the creation of the actors within a flow, where the root actor is created within the main system thread. During the hands-on, we will see more clearly how this hierarchy works.

Hands-on

For this hands-on, we will use Eclipse Luna and Maven 3.0. First, create a simple Maven project – without a defined archetype – and put our dependencies in the pom.xml file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>br.com.alexandreesl.handson</groupId>
<artifactId>HandsOnAkka</artifactId>
<version>0.0.1-SNAPSHOT</version>

<dependencies>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-actor_2.10</artifactId>
<version>2.2.3</version>
</dependency>
</dependencies>

</project>

As we can see above, we simply include the "akka-actor_2.10" dependency in the project. In Akka, we have the concept of actor servers, where we run our processes in the Akka model. You can instantiate these servers and invoke them remotely, but in this example we will start one in a standalone way, in order to keep the learning simple.

To begin, let’s create the main class of the project, where we will start the actors server. The code below accomplishes this task:

package br.com.alexandreesl.handson;

import akka.actor.ActorSystem;

public class ActorServer {

public static void main(String[] args) {

ActorSystem server = ActorSystem.create("ActorServer");

}

}

As we can see, it is quite simple to create an actor server, with only one line. An important point about the server creation is that it creates new threads that keep the program running indefinitely.
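Because of that, the JVM will not exit on its own when the main method returns. If we ever need the program to end, we stop the server explicitly; a minimal sketch, using the shutdown methods available in this version of Akka:

ActorSystem server = ActorSystem.create("ActorServer");

// ... interact with the actors ...

server.shutdown(); // asks the actor system to terminate, asynchronously
server.awaitTermination(); // blocks until all actors have stopped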

In this example, we will simulate the sending of a letter from a person A to a person B, as we talked about throughout the post. For this, we make the call to the first actor of the flow, i.e., the person who receives the letter and takes it to the post office to send:

package br.com.alexandreesl.handson;

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class ActorServer {

public static void main(String[] args) {

ActorSystem server = ActorSystem.create("ActorServer");

ActorRef personA = server.actorOf(Props.create(PersonA.class),
"PersonA");

personA.tell("Message to be delivered ", ActorRef.noSender());

}

}

In the above code, we create a reference (ActorRef) to the PersonA actor and pass the letter to it. To create an actor, we just create a class that extends the UntypedActor class, as we see below.

package br.com.alexandreesl.handson;

import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class PersonA extends UntypedActor {

private ActorRef postRecepcionist;

private LoggingAdapter log = Logging.getLogger(getContext().system(), this);

@Override
public void preStart() throws Exception {
super.preStart();

postRecepcionist = getContext().actorOf(
Props.create(PostReceptionist.class), "PostRecepcionist");

}

@Override
public void onReceive(Object message) throws Exception {

log.info("Receiving the letter");

log.info("Going to the post office");

log.info("Delivering the letter to the post recepcionist");

postRecepcionist.tell(message, getSelf());

}

}

In the above code, we create the reference to the receptionist of the post office in the preStart event and implement the message passing to the receptionist in the actor's main method, onReceive. Within the life cycle of actors in Akka, there are 4 events in which we can insert additional code: preStart, preRestart, postRestart and postStop. Basically, an actor has several "incarnations" (instances): a new one is created each time the actor's main method, onReceive, throws an exception and, according to the supervisor policy set, the actor can either be reincarnated (restart) or finished (stop). Later on we will talk in more detail about the policies, but for now, we can see the life cycle of the actors in the diagram below:
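As a quick reference for the four hooks, the sketch below (an illustrative actor, not part of our flow) overrides all of them:

import scala.Option;

import akka.actor.UntypedActor;

public class LifecycleSketch extends UntypedActor {

@Override
public void preStart() throws Exception {
// runs before the first message of each incarnation
System.out.println("preStart");
}

@Override
public void preRestart(Throwable reason, Option<Object> message) throws Exception {
// runs on the failing incarnation, before it is discarded
super.preRestart(reason, message); // default behavior also stops the children
}

@Override
public void postRestart(Throwable reason) throws Exception {
// runs on the fresh incarnation created after the failure
super.postRestart(reason); // default behavior calls preStart again
}

@Override
public void postStop() throws Exception {
// runs when the actor is definitively stopped
System.out.println("postStop");
}

@Override
public void onReceive(Object message) throws Exception {
throw new RuntimeException("boom"); // any exception here triggers the supervisor's policy
}

}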

One last point to talk about in the PersonA code above is the use of Akka's logging API, through which we log some messages representing the processing performed by the actor. When we use the Akka logging API, beyond the traditional log information found in Java, the actor's position in the hierarchy of the flow is also recorded, facilitating further analysis. The remaining code of the example is analogous to the PersonA actor and represents only the passage of the message through the flow hierarchy, so below we list the rest of this first example:

package br.com.alexandreesl.handson;

import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class PostReceptionist extends UntypedActor {

private ActorRef postMan;

private LoggingAdapter log = Logging.getLogger(getContext().system(), this);

@Override
public void preStart() throws Exception {
super.preStart();

postMan = getContext().actorOf(Props.create(PostMan.class), "PostMan");

}

@Override
public void onReceive(Object message) throws Exception {

log.info("Organizing the letters");

log.info("Delivering the letters to the Postman");

postMan.tell(message, getSelf());

}

}

 

package br.com.alexandreesl.handson;

import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class PostMan extends UntypedActor {

private ActorRef personB;

private LoggingAdapter log = Logging.getLogger(getContext().system(), this);

@Override
public void preStart() throws Exception {
super.preStart();

personB = getContext().actorOf(Props.create(PersonB.class), "PersonB");

}

@Override
public void onReceive(Object message) throws Exception {

log.info("Go to the address with the letter");

log.info("Deliver the letter to personB");

personB.tell(message, getSelf());

}

}

 

package br.com.alexandreesl.handson;

import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class PersonB extends UntypedActor {

private LoggingAdapter log = Logging.getLogger(getContext().system(), this);

@Override
public void onReceive(Object message) throws Exception {

log.info("Reads the letter");

}

}

Finally, to run the example, we just select the ActorServer class and run it as a Java program (Run As > Java Application). The log shows the program execution:

[INFO] [12/25/2014 14:12:48.439] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:12:48.439] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:12:48.439] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:12:48.439] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:12:48.439] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:12:48.442] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:12:48.442] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:12:48.442] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter

As we can see above, all messages were processed in the order of the implemented flow. An important point to note is that the processing of messages by the actors is asynchronous; that is, if we send more messages, an actor does not wait for a message to finish the whole flow before processing the next one. To illustrate, let's modify the main class and include a message loop:

.

.

.

for (int i = 0; i < 10; i++)
personA.tell("Message to be delivered " + i, ActorRef.noSender());

With the above modification, if we execute the program again, we will have the following log:

[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.584] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:16:33.585] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:16:33.587] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.587] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.587] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.587] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:16:33.588] [ActorServer-akka.actor.default-dispatcher-4] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter

As seen above, the executions of the actors are interspersed with one another, demonstrating the asynchronous nature of the processing.

Transactions & fault tolerance

The reader may be wondering how Akka addresses issues such as transactional code and fault tolerance when exceptions occur during execution.

For transactions, Akka provides another module, called "akka-transactor", with classes such as Coordinated, with which we define blocks of code as "atomic", that is, code that must either be executed in its entirety or have its changes aborted in case of failure.
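
To make the idea concrete, here is a hedged sketch of a transactor, based on the (since deprecated) akka-transactor Java API of Akka 2.x; the Deposit message class and the Account actor are hypothetical names, not part of our hands-on project:

import akka.transactor.UntypedTransactor;
import scala.concurrent.stm.Ref;
import scala.concurrent.stm.japi.STM;

// hypothetical message type
class Deposit {
    final int amount;
    Deposit(int amount) { this.amount = amount; }
}

// the body of atomically() runs inside a coordinated STM transaction:
// it is either committed in its entirety or rolled back on failure
public class Account extends UntypedTransactor {

    private final Ref.View<Integer> balance = STM.newRef(0);

    @Override
    public void atomically(Object message) {
        if (message instanceof Deposit) {
            STM.increment(balance, ((Deposit) message).amount);
        }
    }
}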

On the question of fault tolerance, Akka provides the concept of supervisors, where an actor can also be a supervisor. When an actor is a supervisor, it implements a policy that affects all actors under its supervision. In our hands-on example, if we make the receptionist a supervisor, the policy will be applied to both the postman and personB, the next actors in the chain of execution. The framework offers various supervision models, where actors can be either "reincarnated" (restarted) or terminated. In addition, you can also define whether the policy will be applied (restart / stop) to all supervised actors, or only to the actor that failed.

To illustrate its use, let's add the following code snippet to the receptionist actor:

.

.

.

// requires: akka.actor.OneForOneStrategy, akka.actor.SupervisorStrategy,
// akka.actor.SupervisorStrategy.Directive, akka.japi.Function,
// scala.concurrent.duration.Duration
@Override
public SupervisorStrategy supervisorStrategy() {
    // unlimited retries (-1) within an infinite time window:
    // any supervised actor that throws an exception will be restarted
    return new OneForOneStrategy(-1, Duration.Inf(),
            new Function<Throwable, Directive>() {
                public Directive apply(Throwable t) throws Exception {
                    return SupervisorStrategy.restart();
                }
            });
}

In the snippet above, we define a policy for all actors below the receptionist, where, in case of failure, the actor that failed will be restarted. Since it is a OneForOneStrategy, only the failing actor is affected, not its siblings.
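
As noted earlier, the policy can also be applied to all supervised actors at once instead of only the failing one; a minimal sketch of that variant, assuming the same imports plus akka.actor.AllForOneStrategy, simply swaps the strategy class:

@Override
public SupervisorStrategy supervisorStrategy() {
    // AllForOneStrategy: when any child fails, ALL supervised actors
    // are restarted, not only the one that threw the exception
    return new AllForOneStrategy(-1, Duration.Inf(),
            new Function<Throwable, Directive>() {
                public Directive apply(Throwable t) throws Exception {
                    return SupervisorStrategy.restart();
                }
            });
}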

To simulate a fault, modify the code of the PersonB actor:

package br.com.alexandreesl.handson;

import scala.Option;
import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class PersonB extends UntypedActor {

    // counts the letters read since the last (re)start of the actor
    int counter = 0;

    private LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    @Override
    public void preRestart(Throwable reason, Option<Object> message)
            throws Exception {

        // called by the supervision mechanism just before the actor is restarted
        log.info("THE PERSONB IS BOOTING!");

        super.preRestart(reason, message);
    }

    @Override
    public void onReceive(Object message) throws Exception {

        log.info("Reads the letter");

        // fail on every second letter to exercise the supervisor's restart policy
        if (counter % 2 != 0)
            throw new RuntimeException("ERROR!");

        counter++;

    }

}

When we run the example again, we get the following execution log, showing the policy being applied:

[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Receiving the letter
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Going to the post office
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-3] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA] Delivering the letter to the post recepcionist
[INFO] [12/25/2014 14:23:54.807] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.808] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.808] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.808] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.808] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.816] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.816] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.816] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.816] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Organizing the letters
[INFO] [12/25/2014 14:23:54.816] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist] Delivering the letters to the Postman
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.817] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.818] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.818] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[INFO] [12/25/2014 14:23:54.818] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Go to the address with the letter
[INFO] [12/25/2014 14:23:54.818] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan] Deliver the letter to personB
[ERROR] [12/25/2014 14:23:54.823] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] ERROR!
java.lang.RuntimeException: ERROR!
at br.com.alexandreesl.handson.PersonB.onReceive(PersonB.java:29)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[INFO] [12/25/2014 14:23:54.823] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] THE PERSONB IS BOOTING!
[INFO] [12/25/2014 14:23:54.868] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:23:54.868] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[ERROR] [12/25/2014 14:23:54.868] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] ERROR!
java.lang.RuntimeException: ERROR!
at br.com.alexandreesl.handson.PersonB.onReceive(PersonB.java:29)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[INFO] [12/25/2014 14:23:54.868] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] THE PERSONB IS BOOTING!
[INFO] [12/25/2014 14:23:54.868] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:23:54.868] [ActorServer-akka.actor.default-dispatcher-2] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[ERROR] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] ERROR!
java.lang.RuntimeException: ERROR!
at br.com.alexandreesl.handson.PersonB.onReceive(PersonB.java:29)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[INFO] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] THE PERSONB IS BOOTING!
[INFO] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[ERROR] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] ERROR!
java.lang.RuntimeException: ERROR!
at br.com.alexandreesl.handson.PersonB.onReceive(PersonB.java:29)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[INFO] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] THE PERSONB IS BOOTING!
[INFO] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[INFO] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] Reads the letter
[ERROR] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] ERROR!
java.lang.RuntimeException: ERROR!
at br.com.alexandreesl.handson.PersonB.onReceive(PersonB.java:29)
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[INFO] [12/25/2014 14:23:54.869] [ActorServer-akka.actor.default-dispatcher-5] [akka://ActorServer/user/PersonA/PostRecepcionist/PostMan/PersonB] THE PERSONB IS BOOTING!

I encourage the reader to dig deeper into these and other features of the framework.

Conclusion

And so we conclude our post on the Akka framework. With a very interesting concept and an extensible parallel processing model, the framework is a good option that should be evaluated by all developers and architects who want to explore alternatives beyond the traditional thread pool. Many thanks to all who have accompanied me in this post; until next time.


Spring Batch: massive batch processing in Java

Standard

Welcome, dear reader, to another post from my technology blog. In this post, we discuss a framework that may not be very familiar to everyone, but which is a very powerful tool for building batch applications in Java: Spring Batch.

Batch Application: what it is

A batch application, in general, is nothing more than a program whose goal is to process large amounts of data on a scheduled basis, usually through programmed trigger mechanisms (scheduling).

Typically, in companies, we see many such programs built directly into the database layer, using languages such as PL/SQL, for example. This method has its advantages, but there are also several advantages to building a batch program in a technology like Java. One advantage is ease of scaling: a batch built in Java typically runs as a standalone program or within an application server, so its memory, CPU, and so on can be scaled more easily than a batch in PL/SQL. Moreover, building a batch in Java offers more opportunities for reuse, as the same logic can be shared across batch, web, REST, and other entry points.

So, having made our introduction to the subject, let’s proceed and start talking about the framework.

Framework architecture

In the figure below, taken from the framework documentation, we can see the main components that make up the architecture of a Spring Batch job. Let's look at them in more detail.

(figure: the main components of a Spring Batch job's architecture, from the framework documentation)

As we can see above, when we build a job – a term commonly used to describe a batch program, and the one we will use from now on – in the framework, we must implement three types of artifacts: a job script, which consists of an execution plan of steps that makes up the job execution; connection settings for the data sources the job will process, such as databases, JMS queues, etc.; and, of course, the classes that implement the processing logic. A minimal sketch of a job script follows below.
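
Here is a minimal sketch of a job definition using the framework's Java configuration style (an XML job script is equivalent); the importJob / importStep names are hypothetical, and the reader, processor and writer parameters stand for beans like the ones sketched later in the components section:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class JobConfig {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public Job importJob(Step importStep) {
        // the execution plan: a job made of a single chunk-oriented step
        return jobs.get("importJob").start(importStep).build();
    }

    @Bean
    public Step importStep(ItemReader<String> reader,
                           ItemProcessor<String, String> processor,
                           ItemWriter<String> writer) {
        // read, process and write items in chunks of 10 per transaction
        return steps.get("importStep")
                .<String, String>chunk(10)
                .reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
    }
}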

When we use the framework for the first time, one step of the setup is to create a set of database tables that serve as the job repository. Through these tables, the framework controls the status of the different jobs across their executions, enabling a restartability mechanism: in case of failure, a job can be restarted from the point at which it stopped in the last run. To implement this control, Spring Batch provides a control structure represented by the following set of classes:

JobRunner: Class responsible for executing a job upon external request. It has several implementations, providing invocation through different modes, such as a shell script. It instantiates a JobLauncher;

JobLocator: Class responsible for obtaining configuration information, such as the execution plan (job script), for a given job passed as a parameter. It works in conjunction with the JobRunner;

JobLauncher: Class responsible for starting and managing the actual execution of the job; it is instantiated by the JobRunner;

JobRepository: Facade class that mediates the access of the framework classes to the repository tables; it is through this class that jobs report the progress of their executions, which is what makes a restart possible. A launch sketch using these classes follows below.
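
To make these roles concrete, here is a minimal sketch of launching a job programmatically through the JobLauncher; JobConfig and "importJob" are the hypothetical names used in the configuration sketch above. In production, the framework's CommandLineJobRunner plays the JobRunner role for shell-script invocation.

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class JobRunnerExample {

    public static void main(String[] args) throws Exception {
        ApplicationContext context =
                new AnnotationConfigApplicationContext(JobConfig.class);

        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean("importJob", Job.class);

        // a unique parameter makes each run a new JobInstance in the repository
        JobParameters params = new JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis())
                .toJobParameters();

        JobExecution execution = jobLauncher.run(job, params);
        System.out.println("Exit status: " + execution.getStatus());
    }
}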

Thanks to this control mechanism, Spring provides a web application, called Spring Batch Admin, which allows actions such as viewing batch execution logs and starting / stopping / restarting jobs through its interface. More information about the application can be found at the end of the post.

Now that we have clarified the framework's architecture, let's talk about the main components (classes / interfaces to be implemented by the developer) available for building the processing logic itself.

Components

Tasklet: The basic unit of a step; it can be created to implement specific actions of the batch, such as calling a web service whose data will be used by all the steps of the execution. A minimal sketch follows below.
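
A minimal sketch of a custom tasklet, under the assumption of a simple one-off cleanup action:

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class CleanupTasklet implements Tasklet {

    @Override
    public RepeatStatus execute(StepContribution contribution,
                                ChunkContext chunkContext) throws Exception {
        // a one-off action executed by the step, e.g. preparing a work area
        System.out.println("Cleaning up temporary files...");
        // FINISHED tells the step not to call this tasklet again
        return RepeatStatus.FINISHED;
    }
}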

ItemReader: Component used in a structure known as a chunk, where a data source is read, processed and written iteratively, in blocks – chunks – until all the data has been processed. This component implements the reading logic, consuming sources such as databases. The framework comes with a set of pre-built readers, but developers can also write their own if necessary.

ItemProcessor: The processing component of a chunk. It typically consists of the execution of business rules and calls to external resources, such as web services, for data enrichment.

ItemWriter: The writing component of a chunk, responsible for persisting the processed data. As with the ItemReaders, the framework comes with a set of pre-built ItemWriters for destinations such as databases, but developers can also write their own, if necessary. A sketch of the three components follows below.
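
A minimal sketch of the three chunk components working together; the class names and data are hypothetical:

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;

// reads one item per call; returning null signals that the data is exhausted
class NameReader implements ItemReader<String> {

    private final Iterator<String> source =
            Arrays.asList("ana", "bruno", "carla").iterator();

    @Override
    public String read() {
        return source.hasNext() ? source.next() : null;
    }
}

// transforms each item; this is where business rules would live
class UpperCaseProcessor implements ItemProcessor<String, String> {

    @Override
    public String process(String item) {
        return item.toUpperCase();
    }
}

// receives a whole chunk at once and persists it
class ConsoleWriter implements ItemWriter<String> {

    @Override
    public void write(List<? extends String> items) {
        for (String item : items) {
            System.out.println(item);
        }
    }
}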

Decider: Component responsible for implementing routing logic between steps, such as "go to step 1 if a value equals X, go to step 2 if it equals Y, and end the execution if the value equals Z".
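
A sketch of a decider implementing exactly that kind of routing; the "value" key in the execution context is a hypothetical assumption, as are the status names, which would be mapped to steps in the job script:

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;

public class ValueDecider implements JobExecutionDecider {

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution,
                                      StepExecution stepExecution) {
        // a value assumed to have been put in the context by a previous step
        String value = jobExecution.getExecutionContext().getString("value", "Z");
        if ("X".equals(value)) {
            return new FlowExecutionStatus("GO_TO_STEP1");
        }
        if ("Y".equals(value)) {
            return new FlowExecutionStatus("GO_TO_STEP2");
        }
        return FlowExecutionStatus.COMPLETED; // ends the execution
    }
}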

Classifier: Component that can be used in conjunction with other components, such as an ItemWriter, to perform classification logic, such as "execute ItemWriterA for the item if its property X is true; otherwise, execute ItemWriterB". IMPORTANT: in this scenario, the order of execution of the items within the chunk is modified, because the framework first classifies all the items and then executes one ItemWriter at a time!
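
A sketch of that routing with a ClassifierCompositeItemWriter; the Customer type and the two delegate writers are hypothetical:

import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.ClassifierCompositeItemWriter;
import org.springframework.classify.Classifier;

public class ClassifiedWriterFactory {

    // a hypothetical item type carrying the property used for classification
    public static class Customer {
        boolean active;
        public boolean isActive() { return active; }
    }

    public ClassifierCompositeItemWriter<Customer> compositeWriter(
            final ItemWriter<Customer> writerA,
            final ItemWriter<Customer> writerB) {

        ClassifierCompositeItemWriter<Customer> writer =
                new ClassifierCompositeItemWriter<Customer>();
        // routes each item to writerA or writerB based on its "active" flag;
        // the framework groups the items per writer before writing them
        writer.setClassifier(new Classifier<Customer, ItemWriter<? super Customer>>() {
            public ItemWriter<? super Customer> classify(Customer item) {
                return item.isActive() ? writerA : writerB;
            }
        });
        return writer;
    }
}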

Split: Component used when you want, at a certain point of the execution, a set of steps to run in parallel through multithreading.
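
A sketch of a split using the Java configuration style; flow2 and flow3 are hypothetical flows wrapping existing steps:

import org.springframework.batch.core.job.builder.FlowBuilder;
import org.springframework.batch.core.job.flow.Flow;
import org.springframework.batch.core.job.flow.support.SimpleFlow;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

public class SplitExample {

    // the resulting flow executes flow2 and flow3 in parallel,
    // each on its own thread, and only finishes when both are done
    public Flow splitFlow(Flow flow2, Flow flow3) {
        return new FlowBuilder<SimpleFlow>("splitFlow")
                .split(new SimpleAsyncTaskExecutor())
                .add(flow2, flow3)
                .build();
    }
}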

About the Java EE 7 Batch specification

Some readers may be familiar with the new API called "Batch", JSR-352, which introduces a batch processing API to the Java EE 7 platform, with concepts very similar to Spring Batch; it fills an important gap in the reference implementation of the Java technology. More than a philosophical question, some points of attention should be weighed before choosing one framework or the other, such as the requirement of a Java EE container (server) to run JSR-352 batches, its lack of built-in support for database access via JDBC, and its absence of support for reading externalized properties from files, which Spring Batch offers through components called PropertyPlaceholders. In the links at the end of the post, you can read an article detailing the differences between the two in more depth.

Conclusion

Unfortunately, it is not possible to detail, in a single post, all the power of the framework. Several things were left out, such as support for event listeners on job execution, and error handling that allows certain exceptions to have retry policies or to be "ignored" (retry, skip), among other features. I hope, however, to have given the reader a good initial view of the framework, sharpening his curiosity. Massive data processing has always been, and always will be, a major challenge for companies, and our mission, as IT professionals, is the constant learning of the best resources we have available. Thank you for your attention, and until next time.
