Handle R2DBC in Spring Projects


In this article, we will look deeper into what reactive programming is, the place the R2DBC specification takes in it, and how developers can handle it in Spring projects.

#R2DBC #Spring Reactive

Reactive programming is a phrase that seems to be more and more present in web development. New tools and frameworks keep appearing in the development landscape.

We need to follow those changes to be able to provide the best technical solutions to our customers. But what is reactive programming?

According to Wikipedia,

In computing, reactive programming is a declarative programming paradigm concerned with data streams and the propagation of change.

At this point, you may still not know what reactive programming is, nor why it should be taken into account when you consider developing a new web application. Let's try to make things short and clear, so that you can read the rest of the article with everything you need.

Let's go into a bit more detail. Each component of your application is built on the same basis:

- Receiving some input data from a calling component
- Processing the data (maybe by calling some other components) and creating output data.
- Returning the output data

In reactive programming, caller components don't only send input data to the working component: they also subscribe to the returned flow of data. By doing so, they don't wait for the output data. Instead, they are notified when the data are available and then process them.
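
Here is a minimal sketch of that idea using Project Reactor (which we will meet again later in this article): the caller subscribes to a flow of data and provides a callback, instead of blocking until the whole result is computed.

import reactor.core.publisher.Flux;

public class SubscribeExample {

    public static void main(String[] args) {
        // The producing component returns a Flux (a stream of 0..n elements)
        // immediately; nothing is computed at this point.
        Flux<String> names = Flux.just("Alice", "Bob", "Carol")
                .map(String::toUpperCase);

        // The caller subscribes: it is notified element by element,
        // instead of waiting for the complete output data.
        names.subscribe(name -> System.out.println("Received: " + name));
    }
}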

You can find a great article here that presents the principles of reactive programming as well as how Project Reactor implements them in the Java ecosystem. It contains some very useful diagrams that will help your understanding.

Reactive programming is well known by front-end developers because modern frameworks like Angular or React are based on it. Concerning Java, it is arriving step by step, with for example Spring WebFlux and, more recently, R2DBC.


What is R2DBC?

  • Definition

From the R2DBC website:

The Reactive Relational Database Connectivity (R2DBC) project brings reactive programming APIs to relational databases.

There are three main points to understand:

- R2DBC is a specification that provides an interface. Vendors implement it to provide access to the different databases (PostgreSQL, MySQL, ...).

- It's founded on the Reactive Streams specification, which is adopted by major players of the Java ecosystem (such as Vert.x, the MongoDB driver, and Project Reactor).

- Its purpose is to provide a non-blocking alternative to existing relational database driver specifications such as JDBC.
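
To make these points concrete, here is a minimal sketch of the bare R2DBC SPI in action, assuming a local PostgreSQL instance and the r2dbc-postgresql driver (the connection URL and credentials are assumptions for the example):

import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;

import reactor.core.publisher.Mono;

public class R2dbcSpiExample {

    public static void main(String[] args) {
        // The SPI discovers the vendor implementation from the URL scheme.
        ConnectionFactory connectionFactory =
                ConnectionFactories.get("r2dbc:postgresql://postgres:admin@localhost:5433/mydb");

        // Everything is a Reactive Streams Publisher: nothing runs
        // until a subscriber asks for the result.
        // (Connection closing is omitted for brevity.)
        Object value = Mono.from(connectionFactory.create())
                .flatMapMany(connection -> connection.createStatement("SELECT 1").execute())
                .flatMap(result -> result.map((row, metadata) -> row.get(0)))
                .blockFirst(); // blocking here only to keep the demo alive

        System.out.println("Result: " + value);
    }
}

In a real application, you would not block like this; frameworks such as Spring Data R2DBC keep the whole chain reactive, as we will see below.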

 

  • The place of R2DBC in a layered architecture

Let's have a look at a layered architecture:

- The Controller Layer handles requests from the outside, does some verification, and dispatches the requests to the service layer.
- The Service Layer takes the input from the controller and applies some computations based on business specifications. The system's intelligence lives here.
- To make this computation possible, the service layer has to lean on a Data Access Layer that handles the access to the data source.

Using this architecture, every time a layer calls another one, you can use a reactive process. For example, when you are using Spring WebFlux, the controller layer is automatically set up as a reactive component.

R2DBC is a way to bring reactivity to the interface between the Data Access Layer and the database in the case of relational databases. As a consequence, each time this layer calls the database, it releases the working thread until the database notifies it that the result is ready to be processed.

 

  • The advantages of R2DBC compared to JDBC

The advantages of R2DBC over JDBC are the same as the advantages of reactive programming over blocking programming: it performs better in high-concurrency situations. A quick look at this article comparing the performance of different applications at both low and high concurrency is enough to see how impressive the improvement is, especially concerning memory usage per request.

However, at low concurrency, the advantages of R2DBC fade out and its drawbacks should be taken into consideration. Those drawbacks are due to the youth of the specification. Some of them are worth knowing for architects before deciding whether or not to use it:

- Lack of feedback and experience from developers concerning a new specification and its implementations.

- Missing features that prevent the specification (and its implementations) from being fully production-ready. For example, the handling of stored procedures is planned for release 0.9, while currently, we are still on version 0.8.

- Existing competitors, like Quarkus with Eclipse Vert.x, can prevent R2DBC from becoming the future standard.

- Future Java specifications can also change the game. For example, in this article, we can see that Java's Project Loom aims to bring the benefits of reactivity to blocking APIs (such as JDBC).

We can still be reassured by the fact that Spring has decided to create a full module handling this specification: Spring Data R2DBC. The fact that a major player of the Java ecosystem is putting effort into handling R2DBC is why this new technology is worth considering.

Another drawback is more relevant for developers who plan to use R2DBC:

- Lack of features provided by some ORM frameworks (like Hibernate, which is based on JDBC). Here is what we can read on the GitHub repository of spring-data-r2dbc:

Spring Data R2DBC aims at being conceptually easy. In order to achieve this it does NOT offer caching, lazy loading, write behind or many other features of ORM frameworks. This makes Spring Data R2DBC a simple, limited, opinionated object mapper.

And that's why I wanted to explore the use of Spring Data R2DBC in a POC, to get a first idea of how easy it is to use, and how hard it is to find workarounds for the ORM features that do not exist yet.


How to use R2DBC in a Spring project?

You can find the sources of the POC I created on this GitHub repository.

  • Configure a project with R2DBC

To configure a project with Spring, you can use Spring Initializr. You choose the modules you want to work with, and your project is generated with all the needed dependencies. You can still do it manually; all the needed pieces of information are provided below.

To be able to work with Spring Data R2DBC, I added the following dependencies. I decided to use PostgreSQL as my database because it is one of the most used relational databases. You can find the list of available drivers for several databases here.


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-r2dbc</artifactId>
</dependency>
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-postgresql</artifactId>
</dependency>

 

You also need to add a configuration class where you declare the connection factory that will be used by Spring Data to perform requests using the R2DBC specification. This configuration depends on the vendor driver you have chosen for your project. In the case of PostgreSQL, here is an example:


@Configuration
@EnableR2dbcRepositories
public class PostgresConfig extends AbstractR2dbcConfiguration {

    @Override
    @Bean
    public ConnectionFactory connectionFactory() {
        return new PostgresqlConnectionFactory(
                PostgresqlConnectionConfiguration.builder()
                        .host("localhost")
                        .port(5433)
                        .username("postgres")
                        .password("admin")
                        .database("mydb")
                        .build());
    }
}

 

  • Handle entities without relations

In this part, we will concentrate on the simplest mapping: one Java model class to one table.

1. Create the Data Model

Let's create a table with the following columns:

CREATE TABLE person
(
    id uuid NOT NULL DEFAULT uuid_generate_v4(),
    name character varying(255) COLLATE pg_catalog."default" NOT NULL,
    street character varying(255) COLLATE pg_catalog."default" NOT NULL,
    zip_code character varying(255) COLLATE pg_catalog."default" NOT NULL,
    city character varying(255) COLLATE pg_catalog."default" NOT NULL,
    CONSTRAINT person_pkey PRIMARY KEY (id)
);

This table can be mapped with Spring Data in this way:

public class Person {

    @Id
    UUID id;

    String name;
    String street;
    String zipCode;
    String city;
}

In that case, the mapping between the attributes and the columns is done by naming convention.

The @Id annotation is one way to specify which column is the primary key. If that attribute is NULL, Spring will insert a new row in the table when saving the object. If that attribute is not NULL, Spring will try to update the existing row. You can see here other ways to distinguish between new entities and entities to be updated.

One thing you need to notice: there is no annotation like JPA's @GeneratedValue. You need to configure the database properly, because it is the database that handles the automatic generation of primary keys.
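
As mentioned above, there are other ways to distinguish new entities from existing ones. One of them (a hedged sketch, not taken from the POC) is to implement Spring Data's Persistable interface, so that the entity itself tells Spring whether it must be inserted or updated:

import java.util.UUID;

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;
import org.springframework.data.domain.Persistable;

public class Person implements Persistable<UUID> {

    @Id
    UUID id;

    String name;

    // Not mapped to a column: only used to decide between INSERT and UPDATE.
    @Transient
    boolean newEntity;

    @Override
    public UUID getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return newEntity;
    }
}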

 

2. Accessing the database

To be able to access the database, Spring provides us with an interface: R2dbcRepository. You can create your repository interface by extending it:

@Repository
public interface PersonRepository extends R2dbcRepository<Person, UUID> {
}

This interface provides ready-to-use CRUD methods; you can find the list in the documentation.
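
For example, a service can chain those CRUD methods reactively, so that no thread is blocked while the database works (a minimal sketch; PersonService is a hypothetical class of this example, not part of the POC):

import java.util.UUID;

import org.springframework.stereotype.Service;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@Service
public class PersonService {

    private final PersonRepository personRepository;

    public PersonService(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    // Returns immediately; the row is fetched only when a caller subscribes.
    public Mono<Person> findById(UUID id) {
        return personRepository.findById(id);
    }

    public Flux<Person> findAll() {
        return personRepository.findAll();
    }
}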

 

3. Use converters to do complex mappings.

What else is available with Spring Data R2DBC?

If some attributes can be gathered in a nested object, then you can change your Java model, and do something like this:


public class Person {

    @Id
    UUID id;

    String name;
    Address address;

    public static class Address {

        String street;
        String zipCode;
        String city;
    }
}

And then, using the same repository as before, you will need to create two converters:

- One Reading Converter that lets Spring know how to map data from the database to the Java model.


@ReadingConverter
public class PersonReadConverter implements Converter<Row, Person> {

    @Override
    public Person convert(Row source) {
        Address address = Address.builder()
                .city(source.get("city", String.class))
                .zipCode(source.get("zip_code", String.class))
                .street(source.get("street", String.class))
                .build();

        return Person.builder()
                .address(address)
                .name(source.get("name", String.class))
                .id(source.get("id", UUID.class))
                .build();
    }
}

- One Writing Converter that lets Spring know how to map data from the Java model to the database.


@WritingConverter
public class PersonWriteConverter implements Converter<Person, OutboundRow> {

    @Override
    public OutboundRow convert(Person person) {
        OutboundRow row = new OutboundRow();
        if (person.getId() != null) {
            row.put("id", Parameter.from(person.getId()));
        }
        row.put("city", Parameter.from(person.getAddress().getCity()));
        row.put("zip_code", Parameter.from(person.getAddress().getZipCode()));
        row.put("street", Parameter.from(person.getAddress().getStreet()));
        row.put("name", Parameter.from(person.getName()));
        return row;
    }
}

These converters are used by Spring when calls to the repository are made. They need to be declared in the configuration class:


@Override
protected List<Object> getCustomConverters() {
    List<Object> converterList = new ArrayList<>();
    converterList.add(new PersonReadConverter());
    converterList.add(new PersonWriteConverter());
    return converterList;
}

 

4. Projections

You can declare a projection like this:


public interface PersonSummary {

    String getName();
    AddressSummary getAddress();

    interface AddressSummary {
        String getCity();
    }
}

Then, you can complete the repository thanks to the @Query annotation, which allows you to write SQL queries:


public interface PersonRepository extends R2dbcRepository<Person, UUID> {

    @Query("SELECT * FROM person")
    Flux<PersonSummary> findAllSummary();
}

By doing so, the query method will only map the necessary attributes and not the complete object.
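
And since the controller layer is reactive as well with Spring WebFlux, the Flux of projections can be returned end to end without any blocking step (a minimal sketch; the endpoint path is an assumption):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class PersonController {

    private final PersonRepository personRepository;

    public PersonController(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    // WebFlux subscribes to the Flux and streams the summaries to the client.
    @GetMapping("/persons/summaries")
    public Flux<PersonSummary> getPersonSummaries() {
        return personRepository.findAllSummary();
    }
}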

 

  • Handle relationships between entities

That part is the pain point of R2DBC and of Spring's handling of it. Remember that Spring Data R2DBC is not an ORM, so it does not natively handle relationships between entities. We will have to find workarounds and do things a little bit manually.

For the purpose of this article, I asked the Spring dev team (here is the link to the question) if they planned to work on handling these relationships. Here is the answer:

Addressing relationships on the read side is one of the most demanded topics. However, we require a proper approach to fetch all relations within a single query as we do not plan to introduce the N+1 problem. 

So we need to handle things manually for now. Here are some solutions I have found. Let's change our use case and create some tables in the database:


CREATE TABLE author
(
    id uuid NOT NULL,
    name character varying(255) COLLATE pg_catalog."default" NOT NULL,
    CONSTRAINT author_pkey PRIMARY KEY (id)
);

CREATE TABLE book
(
    id uuid NOT NULL,
    title character varying(255) COLLATE pg_catalog."default" NOT NULL,
    author uuid NOT NULL,
    date_of_parution timestamp without time zone NOT NULL,
    CONSTRAINT book_pkey PRIMARY KEY (id),
    CONSTRAINT "book_to_author_FK" FOREIGN KEY (author)
        REFERENCES public.author (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION
);

As you can see, there is a relation between the author and book tables.

 

1- Handling One to One or Many to One relationships

Here, we will handle the Many to One relationship, but One to One follows the same idea. When you do the SQL join, the number of results is the same as the number of entities in Java. You can manage things by using a Reading Converter, like this one:


@ReadingConverter
public class BookReadConverter implements Converter<Row, Book> {

    @Override
    public Book convert(Row source) {
        Author author = Author.builder()
                .name(source.get("authorName", String.class))
                .id(source.get("authorId", UUID.class))
                .build();

        return Book.builder()
                .id(source.get("id", UUID.class))
                .author(author)
                .title(source.get("title", String.class))
                .dateOfParution(source.get("date_of_parution", LocalDate.class))
                .build();
    }
}

Don't forget to declare it in the configuration class, and then you can complete the Book repository by doing a join:


@Repository
public interface BookRepository extends R2dbcRepository<Book, UUID> {

    @Query("SELECT book.*, author.id as authorId, author.name as authorName FROM book JOIN author ON author.id = book.author")
    Flux<Book> findAll();
}

With that, the Many to One relation is handled for the reading part. For the writing part, you still have to save one entity before the other one; you can't use a Writing Converter for that.
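
Here is a hedged sketch of that write path, assuming a hypothetical AuthorRepository similar to the repositories above: the author is saved first, so that its id exists when the book row is written. How the book's author column itself is persisted (for example with a Writing Converter, as for Person) is left aside here.

import org.springframework.stereotype.Service;

import reactor.core.publisher.Mono;

@Service
public class BookService {

    private final AuthorRepository authorRepository; // hypothetical R2dbcRepository<Author, UUID>
    private final BookRepository bookRepository;

    public BookService(AuthorRepository authorRepository, BookRepository bookRepository) {
        this.authorRepository = authorRepository;
        this.bookRepository = bookRepository;
    }

    // The author row must exist first so that the book row can
    // reference its id through the foreign key.
    public Mono<Book> saveBookWithAuthor(Book book) {
        return authorRepository.save(book.getAuthor())
                .flatMap(savedAuthor -> bookRepository.save(book));
    }
}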

 

2- Handling One to Many relationships

Handling One to Many relationships is a little trickier: when you perform the SQL join query, you get more results than the number of entities you want to map at the end. So you need to aggregate some data, and then you are facing one of the issues the Spring Data developers are also facing: you are introducing a blocking process into a reactive programming paradigm which should be, by definition, non-blocking.

If you still want to process this kind of join, you can try something like the following, which orders the results and then aggregates them:

@Repository
@RequiredArgsConstructor
public class AuthorCustomRepository {

    private final DatabaseClient databaseClient;

    public Flux<Author> findAll() {
        return databaseClient.sql(
                "SELECT "
                    + " book.id as bookId, book.title as bookName, book.date_of_parution as dateOfParution, "
                    + " author.id as authorId, author.name as authorName "
                    + "FROM author "
                    + "JOIN book on author.id = book.author "
                    + "ORDER BY authorId")
                .fetch()
                .all()
                .bufferUntilChanged(result -> result.get("authorId"))
                .map(list -> {
                    AuthorBuilder author = Author.builder();
                    author.name(String.valueOf(list.get(0).get("authorName")));
                    author.id((UUID) list.get(0).get("authorId"));

                    author.books(
                            list.stream()
                                    .map(map -> Book.builder()
                                            .title((String) map.get("bookName"))
                                            .id((UUID) map.get("bookId"))
                                            .dateOfParution((LocalDate) map.get("dateOfParution"))
                                            .build())
                                    .collect(Collectors.toList()));

                    return author.build();
                });
    }
}

What is interesting is the use of the bufferUntilChanged method, which pushes aggregated data into the returned Flux as soon as one author object is complete.

 

  • Combine JDBC and R2DBC?

For the POC, I wanted to use a database version control tool such as Liquibase to automatically create and maintain the database alongside the corresponding application code. That kind of tool is useful for making database evolutions. But existing tools are based on JDBC.

Moreover, as we have seen, R2DBC asks developers for more manual work to handle the object-relational mapping of the application, compared to JPA implementations.

So the question is: is it possible to combine both approaches during the development of the application? And the answer is: beware of the lava antipattern!
In our case, this lava antipattern comes from the coexistence of different dependencies that allow the developer to do the same thing in the same application. That leads to misconceptions and to pieces of code that are difficult to read. It should be avoided, and you have to choose whether your project depends on R2DBC or JDBC.

If you want to use tools like Liquibase, it is recommended to separate the build part of the project, where the database model evolution is handled by this tool, from the running part of the application, which is based on R2DBC.
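
For example, here is a hedged Maven sketch of that separation: the Liquibase plugin applies the changelog over JDBC at build (or deployment) time, while the application itself only depends on the R2DBC driver at runtime. The changelog path, URL, and credentials below are assumptions matching the earlier configuration.

<plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <configuration>
        <!-- JDBC is only used here, at build time, to apply the changelog. -->
        <changeLogFile>src/main/resources/db/changelog/db.changelog-master.xml</changeLogFile>
        <url>jdbc:postgresql://localhost:5433/mydb</url>
        <username>postgres</username>
        <password>admin</password>
    </configuration>
</plugin>

This way, the JDBC driver stays a build-time tool and never leaks into the running, reactive application.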


Conclusion

After reading this article, you understand what R2DBC is and the place it takes in the increasingly popular reactive programming paradigm.

You understand that it brings a performance advantage in high-concurrency situations, but you are also aware that, for the moment, using it requires some additional manual work from the development team, and you have some first ideas of how to do things.

You can check out my repository to find all the given examples and to deepen your knowledge of R2DBC.

For now, I think that Spring Data R2DBC is not completely ready for every production project:

  • Projects that do not face high-concurrency issues do not need to use it.
  • Some useful features are still not implemented (like improved relationship handling), but they are on the team's roadmap, even if no clear release date has been defined for now. The Spring teams are completely aware of developers' needs and are working on them.

Still, for some special cases, and if the dev team is ready to do more things manually, the performance improvements that R2DBC brings are worth considering. The fact that Spring is putting its support behind it proves that it is a technology that will bring us some help in the future.

Here are some useful links to continue exploring R2DBC.