In this article we aim to clarify why it is sometimes necessary to map between two different classes, and how to do it by building the simplest possible mapping manager step by step. Design choices and constraints often produce a considerable amount of mechanical work, which we will try to minimize by mapping Java objects automatically.
By “mapping objects” we mean copying one object’s state to another object. Sometimes the mapping is transparent, because both objects have the same attributes, but often it happens between two objects with different attributes. Anyone familiar with Web API development has probably faced a situation where the resource representation is similar but not identical to the domain model, or where you simply don’t want to expose your domain objects to any other layer of your system. In these cases we generally create a component to abstract this mapping task, which also allows us to reuse it in other parts of the system.
The advantage of having a mapping component (e.g. to map your domain to a DTO and vice versa) is not obvious when you support only a single mapping, but as the number of mappings grows, keeping that code isolated from the domain helps keep the domain simpler and leaner, instead of cluttering it with extra weight.
To clarify the mapping idea, let’s start by creating two domain classes (Customer and Address) and one DTO class (CustomerDTO).
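As a minimal sketch of those three classes and of the kind of hand-written mapper we want to automate later (the field names and the flattened address format are assumptions for illustration, not the article’s actual classes):

```java
// Sketch of the two domain classes and the DTO mentioned in the text.
// Field names (name, street, city) are illustrative assumptions.
class Address {
    String street;
    String city;
    Address(String street, String city) { this.street = street; this.city = city; }
}

class Customer {
    String name;
    Address address;
    Customer(String name, Address address) { this.name = name; this.address = address; }
}

class CustomerDTO {
    String name;
    String address; // flattened "street, city" representation for the API layer
}

public class MappingExample {
    // A hand-written mapper: exactly the mechanical work a mapping
    // manager should eventually do for us automatically.
    static CustomerDTO toDTO(Customer customer) {
        CustomerDTO dto = new CustomerDTO();
        dto.name = customer.name;
        dto.address = customer.address.street + ", " + customer.address.city;
        return dto;
    }

    public static void main(String[] args) {
        Customer c = new Customer("Alice", new Address("Main St", "Springfield"));
        CustomerDTO dto = toDTO(c);
        System.out.println(dto.name + " / " + dto.address);
    }
}
```

Writing one such `toDTO` method is harmless; writing dozens of them by hand is the mechanical work the rest of the article sets out to eliminate.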
Before I talk about the issues in my Notifications project, let’s see how to use it.
This is the simplest possible example of a notification:
var notification = new Notification("yeah, baby, yeah!");
Pretty cool, eh?
In the example above we are only setting the title attribute of the notification. There are other attributes we can declare for a better experience, and, yay, they work almost the same way across the major browsers (except for the IE family, of course. But we are talking about real browsers, aren’t we? Don’t worry, we’ll create a fallback for them).
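As a minimal sketch of that fallback idea, here is a small wrapper with feature detection. The `notify` helper and its console fallback are assumptions for illustration; the post builds its own fallback later:

```javascript
// Sketch: wrap the browser Notification API behind feature detection.
// The helper name and the console fallback are illustrative assumptions.
function notify(title, options) {
  const supported =
    typeof window !== "undefined" && "Notification" in window;
  if (supported) {
    // body and icon are standard Notification options.
    return new Notification(title, options);
  }
  // Fallback for environments without the API (e.g. the IE family):
  // here we just log, but it could render an in-page banner instead.
  console.log(`[notification] ${title}: ${options && options.body}`);
  return null;
}

notify("yeah, baby, yeah!", { body: "Hello from the fallback path" });
```

The point of the wrapper is that calling code never has to know whether the real API or the fallback handled the notification.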
Recently, I faced an unusual requirement during the implementation of a project. My task was to implement a web crawler to index the content of a few websites and save that data into an Elasticsearch index for further analysis. The risky decision here lay in the fact that I had no strong reason to keep the extracted data anywhere else, since all user interaction with it would happen through a web application that connects directly to Elasticsearch. But if the Elasticsearch index mapping ever changed in the future, I would be forced to re-index part or all of the data, which means extracting the same data from the websites all over again.
Adopting a relational database to address this need seemed to me an unjustified implementation effort. It would drastically increase the time, cost and complexity of implementing and maintaining the project, just to hedge against a future change in my index mapping. Dealing with database modeling, choosing a persistence framework, implementing extra tests, … I feel tired just thinking about it. So, talking with my friend Paulo about this problem, he told me about the elasticsearch-river-mongodb project, an Elasticsearch plugin that propagates data changes from a MongoDB collection to an Elasticsearch index.
Using MongoDB seemed like a good idea. The data extracted from websites is not well structured and is highly likely to change frequently. A schema-free, document-oriented database fits well here, since it is flexible enough to accommodate changes in the data structure with minimal impact.
But how do we integrate Elasticsearch with MongoDB?
Although the elasticsearch-river-mongodb project seems awesome, offering filter and transformation capabilities, it is deprecated, with Elasticsearch 1.7.3 and MongoDB 3.0.0 as the last supported versions. You can find more about the deprecation decision in the article “Deprecating Rivers”.
It is a shame, but all is not lost. The MongoDB team offers the mongo-connector project, which creates a pipeline to target systems and includes a document manager for Elasticsearch. Great! And I’m so happy with the final result of this solution that I want to share my experience with you. My intention throughout this post is to show what I found useful, what was tricky, and what limitations I ran into while implementing this solution.
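To give an idea of what the pipeline looks like, here is a sketch of a minimal mongo-connector invocation. The hosts and ports are placeholders, and it assumes MongoDB runs as a replica set (the connector works by tailing the oplog) with Elasticsearch 2.x as the target:

```shell
# Install the connector and the Elasticsearch document manager
# (elastic2-doc-manager targets Elasticsearch 2.x; pick the one
# matching your cluster version).
pip install mongo-connector elastic2-doc-manager

# Tail the MongoDB oplog and replicate changes into Elasticsearch.
# -m: source MongoDB (must be a replica set member, since the
#     connector reads the oplog)
# -t: target Elasticsearch endpoint
# -d: document manager that translates operations for the target
mongo-connector -m localhost:27017 -t localhost:9200 -d elastic2_doc_manager
```

From then on, inserts, updates and deletes in MongoDB flow into the Elasticsearch index without any extra application code.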
A somewhat little-known feature of MySQL, also present in other databases such as Oracle, is the ability to export query results directly to files. This MySQL feature is quite handy when we need to do a one-off data export that can later be imported into tools such as Excel.
But let’s get to the point: to export the result set of a query, we will use the “INTO OUTFILE”, “FIELDS TERMINATED BY”, “ENCLOSED BY” and “LINES TERMINATED BY” clauses.
Below is an example of how to use these clauses:
mysql> SELECT * FROM produto.usuarios
-> INTO OUTFILE '/tmp/usuarios.csv'
-> FIELDS TERMINATED BY ','
-> ENCLOSED BY '"'
-> LINES TERMINATED BY '\n';
Query OK, 5 rows affected (0.00 sec)