Introduction to Spock Framework


What is Spock Framework anyway?

Spock Framework is an open-source testing and specification framework for Java and Groovy applications. It lets you write concise, expressive tests in a Behaviour-Driven Development (BDD) style, which makes your tests clearer. Spock is compatible with most IDEs, build tools, and continuous integration servers, and draws inspiration from JUnit, jMock, RSpec, Groovy, and Scala.

Understanding How the Spock Framework Works

Suppose we have a Publisher class that sends messages to its Subscribers:
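The article's original listing is not reproduced here, but a minimal sketch of such a Publisher could look like the following (class and method names are assumptions, not the article's actual code):

```java
import java.util.ArrayList;
import java.util.List;

// A subscriber simply receives messages.
interface Subscriber {
    void receive(String message);
}

// A publisher forwards every sent message to all registered subscribers.
class Publisher {
    private final List<Subscriber> subscribers = new ArrayList<>();

    void addSubscriber(Subscriber subscriber) {
        subscribers.add(subscriber);
    }

    void send(String message) {
        for (Subscriber subscriber : subscribers) {
            subscriber.receive(message);
        }
    }
}

public class PublisherDemo {
    public static void main(String[] args) {
        Publisher publisher = new Publisher();
        publisher.addSubscriber(msg -> System.out.println("received: " + msg));
        publisher.send("hello");
    }
}
```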

And the respective unit test class:
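The original specification listing is likewise not shown here; a minimal Spock specification for such a Publisher might look like this sketch (spec and feature-method names are assumptions):

```groovy
import spock.lang.Specification

class PublisherSpec extends Specification {
    Publisher publisher = new Publisher()
    Subscriber subscriber = Mock()   // Spock's built-in mocking

    def setup() {
        publisher.addSubscriber(subscriber)
    }

    def "delivers messages to all subscribers"() {
        when:
        publisher.send("hello")

        then:
        // Spock interaction: expect exactly one call with this argument
        1 * subscriber.receive("hello")
    }
}
```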

In Spock we don’t write tests, we write specifications. A specification is a Groovy class that extends “spock.lang.Specification”; under the hood Spock runs specifications through JUnit’s runner infrastructure, which is why they integrate with existing JUnit tooling.


Monitoring Java applications with ELK

Monitoring Java applications with ELK (Elasticsearch, Logstash and Kibana) will show you, step by step, how to centralize your Java application logs by sending all log messages to a remote ELK server. With this approach you gather everything your Java applications generate, across multiple servers, in one centralized place, so you can easily build dashboards and analyze your applications at a higher, more practical level.

You know it’s sad but true

Let’s consider a very common scenario in many companies: several Java applications running across multiple application servers, each performing many operations per day and logging thousands upon thousands of lines that nobody checks unless a problem occurs. Sounds familiar, doesn’t it? The biggest issue is that, unless we are debugging a production problem, those logs have no value at all. They tell us nothing about the aspects we should care about, such as business process performance. Yet there’s gold within these logs!

How about building a better scenario?

Think about the sad story I just told you. Now imagine all your Java applications producing the same amount of logs, but sending them to a centralized place where the received data is properly analyzed, transformed and finally presented in a truly accessible way. Would you like to know how many payments your system processed in the last minute, day or week? Or how many times a specific exception was thrown? The possibilities are endless.

Let’s see how to achieve this desired scenario using the ELK stack.

Proposed solution

Our proposed solution combines a Java application configured to use Logback (the successor of the famous Log4j), the specialized log appender class “LogstashTcpSocketAppender” (provided by the Logstash team) and an ELK server.
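On the application side, the wiring happens in logback.xml. A sketch of such a configuration is shown below; the destination host and port are assumptions and must match the TCP input you configure on the Logstash side:

```xml
<configuration>
  <!-- Hypothetical destination: point this at your ELK server's Logstash TCP input -->
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>elk-server.example.com:4560</destination>
    <!-- Encodes each log event as a JSON line that Logstash can parse -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>

  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```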

Tutorial – Monitoring Java applications with ELK

Step 1 – Setup the ELK stack

We have two detailed articles on how to set up the ELK stack on Ubuntu and on Windows; please check them via the links below:

Step 2 – Configure Logstash to receive our logs

Within the ELK server, create a new configuration file (/etc/logstash/conf.d/logback-listener.conf on Ubuntu 16.04, or D:\ELK\logstash-2.3.4\conf.d\logback-listener.conf on Windows) and insert the Logstash listener configuration.
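The article’s actual listing follows after the jump, but a typical listener for JSON-encoded Logback events has this shape (the port, codec and Elasticsearch address here are assumptions that must match your appender configuration):

```conf
input {
  tcp {
    port  => 4560          # must match the appender's destination port
    codec => json_lines    # LogstashTcpSocketAppender emits one JSON event per line
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```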

How to install ELK on Windows

In this tutorial we provide detailed instructions on how to install ELK (Elasticsearch, Logstash and Kibana) on Windows.

A short introduction about the ELK stack

ELK is a powerful and versatile stack for collecting, analyzing and exploring data in real time.

The components of the ELK stack are:

Elasticsearch – Search and analyze data in real time.

Logstash – Collect, enrich, and transport data.

Kibana – Explore and visualize data.

Tutorial – How to install ELK on Windows

Step 1 – Install Java 8

This step is mandatory, since both Elasticsearch and Logstash require Java. We recommend Java 8 because, at the time of writing, it is the most recent stable version.

While a JRE can be used for the Elasticsearch service, its usage is discouraged and triggers a warning, because the JRE runs a client VM (as opposed to a server VM, which offers better performance for long-running applications).

Download JDK installer

Access the Java download page (http://www.oracle.com/technetwork/pt/java/javase/downloads/jdk8-downloads-2133151.html), click “Accept License Agreement” and then select the “Windows x64” option. At the time of writing, the newest version is jdk-8u101-windows-x64.exe.

Install JDK

Just execute the JDK installer and follow the wizard instructions.

Step 2 – Create a folder to keep the ELK components grouped

Create a directory “D:\ELK”. This directory will be used to keep all ELK components grouped in the same folder.

Step 3 – Download and configure Elasticsearch 2.3.5

Download Elasticsearch

Download the Elasticsearch ZIPPED package from here: https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/zip/elasticsearch/2.3.5/elasticsearch-2.3.5.zip

Extract its content to the “D:\ELK” folder. The result will be “D:\ELK\elasticsearch-2.3.5”.


How to install ELK on Ubuntu 16.04

In this tutorial we provide detailed instructions on how to install ELK (Elasticsearch, Logstash and Kibana) on Ubuntu 16.04.

A short introduction about the ELK stack

ELK is a powerful and versatile stack for collecting, analyzing and exploring data in real time. The components of the ELK stack are:

Elasticsearch – Search and analyze data in real time.

Logstash – Collect, enrich, and transport data.

Kibana – Explore and visualize data.

Tutorial

Step 1 – Install Java 8

This step is mandatory, since both Elasticsearch and Logstash require Java. We recommend Java 8 because, at the time of writing, it is the most recent stable version.

First of all we need to add the Oracle Java PPA:

Then just update the apt package database and install the package oracle-java8-installer:
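The commands themselves follow after the jump; a plausible reconstruction of these two steps (the PPA name is the one commonly used for Oracle Java 8 at the time and is an assumption here) would be:

```conf
# Add the Oracle Java PPA
sudo add-apt-repository ppa:webupd8team/java

# Refresh the package database and install Java 8
sudo apt-get update
sudo apt-get install oracle-java8-installer
```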


How to load nested JPA entities in a single query

Have you ever noticed that, when accessing a managed entity’s attributes, JPA often performs many subsequent database queries in order to lazily load information? It’s a sensible default behavior and works in the majority of cases, but sometimes we need to change it on an ad hoc, per-use-case basis. For these situations JPA offers a handy feature known as fetch joins. In short, fetch joins let us define dynamically which relations we want to eagerly load when making a query. In this article we are going to retrieve nested JPA entities in a single query, avoiding many database calls.

Example

Let’s consider the following example:

Our system has two entities: Customer and Address. The relationship between them is a bidirectional one-to-many from Customer to Address, meaning a Customer can have zero or more instances of Address.
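A sketch of these two entities and of the fetch-join query described above might look like the following (field and query names are assumptions; the DISTINCT keyword is the usual way to avoid duplicate Customer rows produced by the join):

```java
// Bidirectional one-to-many: Customer -> Address
@Entity
public class Customer {
    @Id @GeneratedValue
    private Long id;

    // Collections are lazy by default: each access can trigger an extra query
    @OneToMany(mappedBy = "customer")
    private List<Address> addresses;
}

@Entity
public class Address {
    @Id @GeneratedValue
    private Long id;

    @ManyToOne
    private Customer customer;
}

// A fetch join loads the customers AND their addresses in a single SQL query:
List<Customer> customers = entityManager
    .createQuery("SELECT DISTINCT c FROM Customer c JOIN FETCH c.addresses",
                 Customer.class)
    .getResultList();
```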
