7 Useful Examples of the Java 8 Stream API

You have probably already heard about the Java 8 Stream API (if not, you can find more information about it here). It is essentially an abstraction that allows us to process data in a simple and declarative way. Besides that, streams can leverage multi-core architectures without you having to write a single line of multi-threaded code.

In this article we won’t describe each of the methods available in the Java 8 Stream API. Instead, we will focus on practical examples of how to use this new API.

First of all, let’s present our Athlete model class:

Now, let’s initialize a List of Athletes with 6 objects…

Based on the List of Athletes presented above, let’s now see how easily we can manipulate this data and extract the information we want.

1. Getting the quantity of female athletes (filter + count)

2. Retrieving the list of American athlete names (filter + map + collect)
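The original code listings are not reproduced on this page, so here is a self-contained sketch of both examples. The Athlete fields (name, country, gender) and the sample data are assumptions for illustration; the article’s actual model class may differ.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamExamples {

    // Hypothetical Athlete model; the article's actual fields may differ.
    static class Athlete {
        private final String name;
        private final String country;
        private final String gender;

        Athlete(String name, String country, String gender) {
            this.name = name;
            this.country = country;
            this.gender = gender;
        }

        String getName()    { return name; }
        String getCountry() { return country; }
        String getGender()  { return gender; }
    }

    public static void main(String[] args) {
        // Illustrative list of 6 athletes
        List<Athlete> athletes = Arrays.asList(
                new Athlete("Michael Phelps", "USA", "M"),
                new Athlete("Katie Ledecky", "USA", "F"),
                new Athlete("Usain Bolt", "Jamaica", "M"),
                new Athlete("Simone Biles", "USA", "F"),
                new Athlete("Neymar", "Brazil", "M"),
                new Athlete("Rafaela Silva", "Brazil", "F"));

        // 1. Quantity of female athletes (filter + count)
        long femaleCount = athletes.stream()
                .filter(a -> "F".equals(a.getGender()))
                .count();
        System.out.println(femaleCount); // 3

        // 2. Names of American athletes (filter + map + collect)
        List<String> americanNames = athletes.stream()
                .filter(a -> "USA".equals(a.getCountry()))
                .map(Athlete::getName)
                .collect(Collectors.toList());
        System.out.println(americanNames); // [Michael Phelps, Katie Ledecky, Simone Biles]
    }
}
```

Note that `count` is a terminal operation returning a `long`, while `collect(Collectors.toList())` materializes the intermediate pipeline into a new `List`.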

Automatically Mapping Java Objects

In this article we aim to clarify why it is sometimes important to map two different classes, and how to do it by building the simplest possible mapping manager step by step. Design choices and constraints often produce a considerable amount of mechanical work, which we will try to minimize by automatically mapping Java objects.

By “mapping objects” we mean copying one object’s state to another object. Sometimes the mapping is transparent, since both objects have the same attributes, but often it happens between two objects with different attributes. Those familiar with Web API development have probably faced a situation where the resource representation is similar, but not identical, to the domain model, or where you don’t want to expose your domain objects to any other layer of the system. In these cases we generally create a component to abstract the mapping task, which also allows us to reuse it in other parts of the system.

The advantage of having a mapping component (e.g. to map your domain to DTOs and vice versa) is not so apparent while you support only a single mapping, but as the number of mappings increases, keeping that code isolated from the domain helps keep the domain simpler and leaner. You won’t be cluttering your domain with a lot of extra weight.

Hands On

To clarify the mapping idea, let’s start by creating two domain classes (Customer and Address) and one DTO class (CustomerDTO).
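Since the listings are not shown on this page, here is a minimal, self-contained sketch. The fields of Customer, Address and CustomerDTO below are assumptions (the article’s actual classes may differ), and the map() helper is a deliberately naive reflection-based mapper that copies fields with matching names and types:

```java
import java.lang.reflect.Field;

public class MappingExample {

    // Hypothetical domain classes; the article's actual fields may differ.
    static class Address {
        String street;
        String city;
        Address(String street, String city) { this.street = street; this.city = city; }
    }

    static class Customer {
        String name;
        String email;
        Address address;
        Customer(String name, String email, Address address) {
            this.name = name; this.email = email; this.address = address;
        }
    }

    // Flattened representation exposed to other layers.
    static class CustomerDTO {
        String name;
        String email;
    }

    // Simplest possible mapper: copy fields with matching name and type.
    static <T> T map(Object source, Class<T> targetType) throws Exception {
        T target = targetType.newInstance();
        for (Field targetField : targetType.getDeclaredFields()) {
            try {
                Field sourceField = source.getClass().getDeclaredField(targetField.getName());
                if (sourceField.getType().equals(targetField.getType())) {
                    sourceField.setAccessible(true);
                    targetField.setAccessible(true);
                    targetField.set(target, sourceField.get(source));
                }
            } catch (NoSuchFieldException ignored) {
                // Target field has no counterpart in the source; skip it.
            }
        }
        return target;
    }

    public static void main(String[] args) throws Exception {
        Customer customer = new Customer("John Doe", "john@example.com",
                new Address("5th Avenue", "New York"));
        CustomerDTO dto = map(customer, CustomerDTO.class);
        System.out.println(dto.name + " / " + dto.email);
    }
}
```

A production-grade mapping manager would add caching of Field lookups, type conversion and nested-object support, but the reflective field copy above is the core of the technique.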

Monitoring Java applications with ELK

Monitoring Java applications with ELK (Elasticsearch, Logstash and Kibana) will show you, step by step, how to centralize your Java application logs by sending all log messages to a remote ELK server. Using this approach, you can gather all the information generated by Java applications running across multiple servers in one centralized place. From there you can easily create dashboards and start analyzing your applications at a higher, more practical level.

You know it’s sad but true

Let’s think about a very common scenario in many companies: several Java applications running across multiple application servers, each performing many operations per day and logging thousands upon thousands of lines that generally nobody checks unless a problem occurs. It sounds familiar, doesn’t it? The biggest issue here is that, unless we are debugging a production problem, the logs have no value at all. They tell us nothing about the aspects we should care about, such as business process performance. There is gold within these logs!

How about building a better scenario?

Think about the sad story I just told you. Now imagine all your Java applications producing the same amount of logs, but sending them to a centralized place where all the received data is analyzed, transformed and finally presented in a truly accessible way. Would you like to know how many payments your system processed in the last minute, day or week? What about how many times a specific exception was thrown? The possibilities are endless.

Let’s see how to achieve this desired scenario using the ELK stack.

Proposed solution

Our proposed solution combines a Java application configured to use Logback (the successor of the famous Log4J), the specialized log appender class LogstashTcpSocketAppender (provided by the Logstash team), and an ELK server.
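As a sketch, the Logback side of this setup could look like the following logback.xml fragment; the host elk.example.com and port 4560 are placeholder assumptions, and the appender and encoder classes come from the logstash-logback-encoder library, which must be on the application’s classpath:

```xml
<configuration>
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Replace with your ELK server's address and the TCP port Logstash listens on -->
    <destination>elk.example.com:4560</destination>
    <!-- Serializes each log event as a JSON line -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
  </appender>

  <root level="INFO">
    <appender-ref ref="LOGSTASH" />
  </root>
</configuration>
```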

Tutorial – Monitoring Java applications with ELK

Step 1 – Setup the ELK stack

We have two detailed articles on how to set up the ELK stack on Ubuntu and on Windows; please check the links below:

Step 2 – Configure Logstash to receive our logs

Within the ELK server, create a new configuration file (/etc/logstash/conf.d/logback-listener.conf on Ubuntu 16.04, or D:\ELK\logstash-2.3.4\conf.d\logback-listener.conf on Windows) and insert the following content:
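The article’s actual listing is not reproduced on this page; for illustration, a minimal Logstash listener for this setup would typically declare a tcp input. The port and codec below are assumptions that must match the appender configuration on the Java side:

```conf
input {
  tcp {
    # Same port the LogstashTcpSocketAppender sends to
    port  => 4560
    # LogstashEncoder emits one JSON document per line
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```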

How to install ELK on Windows

In this tutorial we will provide you with detailed instructions on how to install ELK (Elasticsearch, Logstash and Kibana) on Windows.

A short introduction about the ELK stack

ELK is a powerful and versatile stack for collecting, analyzing and exploring data in real time.

The components of the ELK stack are:

Elasticsearch – Search and analyze data in real time.

Logstash – Collect, enrich, and transport data.

Kibana – Explore and visualize data.

Tutorial – How to install ELK on Windows

Step 1 – Install Java 8

This step is mandatory, since both Elasticsearch and Logstash require Java. We recommend Java 8 because it is currently the most recent stable version.

While a JRE could be used to run the Elasticsearch service, this is discouraged and will trigger a warning: the JRE uses a client VM, as opposed to a server VM, which offers better performance for long-running applications.

Download JDK installer

Access the Java download page (http://www.oracle.com/technetwork/pt/java/javase/downloads/jdk8-downloads-2133151.html), click on “Accept License Agreement” and then select the “Windows x64” option. At the time of writing, the newest version is jdk-8u101-windows-x64.exe.

Install JDK

Just execute the JDK installer and follow the wizard instructions.

Step 2 – Create a folder to keep the ELK components grouped

Create the directory “D:\ELK”. It will be used to keep all the ELK components grouped in one place.

Step 3 – Download and configure Elasticsearch 2.3.5

Download Elasticsearch

Download the Elasticsearch ZIPPED package from here: https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/zip/elasticsearch/2.3.5/elasticsearch-2.3.5.zip

Extract its content to the “D:\ELK” folder. The result will be “D:\ELK\elasticsearch-2.3.5”.

How to install ELK on Ubuntu 16.04

In this tutorial we will provide you with detailed instructions on how to install ELK (Elasticsearch, Logstash and Kibana) on Ubuntu 16.04.

A short introduction about the ELK stack

ELK is a powerful and versatile stack for collecting, analyzing and exploring data in real time. The components of the ELK stack are:

Elasticsearch – Search and analyze data in real time.

Logstash – Collect, enrich, and transport data.

Kibana – Explore and visualize data.

Tutorial – How to install ELK on Ubuntu 16.04


Step 1 – Install Java 8

This step is mandatory, since both Elasticsearch and Logstash require Java. We recommend Java 8 because it is currently the most recent stable version.

First of all we need to add the Oracle Java PPA:

Then just update the apt package database and install the package oracle-java8-installer:
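As a sketch, these two steps could be carried out with the commands below. The webupd8team PPA is an assumption, based on the source commonly used for the oracle-java8-installer package at the time:

```shell
# Add the Oracle Java PPA (assumed: webupd8team)
sudo add-apt-repository ppa:webupd8team/java

# Refresh the apt package database and install Java 8
sudo apt-get update
sudo apt-get install oracle-java8-installer
```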
