
blaze-persistence's Introduction


Blaze-Persistence

Blaze-Persistence is a rich Criteria API for JPA providers.

What is it?

Blaze-Persistence is a rich Criteria API for JPA providers that aims to be better than all the other Criteria APIs available. It provides a fluent API for building queries and removes common restrictions encountered when working with JPA directly. It offers rich pagination support and also supports keyset pagination.

The Entity-View module can be used to create views for JPA entities. You can roughly imagine that an entity view is to an entity what an RDBMS view is to a table.

The JPA-Criteria module implements the Criteria API of JPA but is backed by the Blaze-Persistence Core API so you can get a query builder out of your CriteriaQuery objects.

With the Spring Data or DeltaSpike Data integrations you can easily make use of Blaze-Persistence in your existing repositories.

Features

Blaze-Persistence is not only a Criteria API that allows you to build queries more easily, but it also comes with a lot of features that are normally not supported by JPA providers.

Here is a rough overview of the new features that Blaze-Persistence introduces on top of the JPA model:

  • Use CTEs and recursive CTEs
  • Use modification CTEs aka DML in CTEs
  • Make use of the RETURNING clause from DML statements
  • Use the VALUES clause for reporting queries and soon make use of table generating functions
  • Create queries that use SET operations like UNION, EXCEPT and INTERSECT
  • Manage entity collections via DML statements to avoid reading them in memory
  • Define functions similar to Hibernate's SQLFunction in a JPA provider agnostic way
  • Use many built-in functions like GROUP_CONCAT, date extraction, date arithmetic and many more
  • Easy pagination and simple API to make use of keyset pagination

In addition to that, Blaze-Persistence also works around some JPA provider issues in a transparent way.
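
To illustrate the keyset pagination point above, the idea can be sketched in plain Java. This is only a conceptual illustration of the predicate shape; the column array and the :last_* parameter naming scheme are made up for the example and are not part of the Blaze-Persistence API:

```java
// Conceptual sketch of the predicate behind keyset pagination: instead of
// OFFSET-skipping N rows, the next page filters relative to the last row of
// the previous page using the ORDER BY columns. For an ascending ordering
// over (name, id), the expanded predicate looks like the string built here.
public class KeysetSketch {

    // Builds the expanded keyset predicate for ascending order columns.
    static String keysetPredicate(String[] orderByColumns) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < orderByColumns.length; i++) {
            if (i > 0) {
                sb.append(" OR ");
            }
            sb.append('(');
            // All higher-priority ORDER BY columns must be equal...
            for (int j = 0; j < i; j++) {
                sb.append(orderByColumns[j]).append(" = :last_").append(orderByColumns[j]).append(" AND ");
            }
            // ...and this column must be strictly greater than the last seen value
            sb.append(orderByColumns[i]).append(" > :last_").append(orderByColumns[i]).append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(keysetPredicate(new String[] {"name", "id"}));
    }
}
```

Blaze-Persistence builds and binds such predicates for you when you use its pagination API, so you never have to assemble them by hand.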

How to use it?

Blaze-Persistence is split up into different modules. We recommend that you define a version property in your parent pom that you can use for all artifacts. Modules are all released in one batch so you can safely increment just that property.

<properties>
    <blaze-persistence.version>1.6.12</blaze-persistence.version>
</properties>

Alternatively you can also use our BOM in the dependencyManagement section.

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.blazebit</groupId>
            <artifactId>blaze-persistence-bom</artifactId>
            <version>${blaze-persistence.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>    
    </dependencies>
</dependencyManagement>
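
With the BOM imported, individual Blaze-Persistence artifacts can then be declared without an explicit version. The core API artifact is shown here as an example; which artifacts you need depends on the modules you use:

```xml
<dependencies>
    <dependency>
        <groupId>com.blazebit</groupId>
        <artifactId>blaze-persistence-core-api</artifactId>
        <!-- Version is managed by the imported BOM -->
    </dependency>
</dependencies>
```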

Quickstart

If you want a sample application with everything set up, where you can poke around and try out things, just go with our archetypes!

Core-only archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-core-sample" "-DarchetypeVersion=1.6.12"

Entity view archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-entity-view-sample" "-DarchetypeVersion=1.6.12"

Spring-Data archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-spring-data-sample" "-DarchetypeVersion=1.6.12"

Spring-Boot archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-spring-boot-sample" "-DarchetypeVersion=1.6.12"

DeltaSpike Data archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-deltaspike-data-sample" "-DarchetypeVersion=1.6.12"

Java EE archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-java-ee-sample" "-DarchetypeVersion=1.6.12"

Core-only Jakarta archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-core-sample-jakarta" "-DarchetypeVersion=1.6.12"

Entity view Jakarta archetype:

mvn archetype:generate "-DarchetypeGroupId=com.blazebit" "-DarchetypeArtifactId=blaze-persistence-archetype-entity-view-sample-jakarta" "-DarchetypeVersion=1.6.12"

Supported Java runtimes

All projects are built for Java 7, except for those whose dependencies already require Java 8, e.g. Hibernate 5.2, Spring Data 2.0 etc. So you will need JDK 8 to build the project. The latest Java version we test and support is Java 21.

We also support building the project with JDK 9 and try to keep up with newer versions. If you want to run your application on a Java 9 JVM you need to handle the fact that JDK 9+ doesn't export the JAXB and JTA APIs anymore. In fact, JDK 11 removed the modules, so the command line flags to add modules to the classpath won't work.

Since libraries like Hibernate and others require these APIs you need to make them available. The easiest way to get these APIs back on the classpath is to package them along with your application. This will also work when running on Java 8. We suggest you add the following dependencies.

<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <!-- Use version 3.0.1 if you want to use Jakarta EE 9 -->
    <version>2.3.3</version>
    <!-- In a managed environment like Java/Jakarta EE, use 'provided'. Otherwise use 'compile' -->
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-impl</artifactId>
    <!-- Use version 3.0.2 if you want to use Jakarta EE 9 -->
    <version>2.3.3</version>
    <!-- In a managed environment like Java/Jakarta EE, use 'provided'. Otherwise use 'compile' -->
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.transaction</groupId>
    <artifactId>jakarta.transaction-api</artifactId>
    <!-- Use version 2.0.0 if you want to use Jakarta EE 9 -->
    <version>1.3.3</version>
    <!-- In a managed environment like Java/Jakarta EE, use 'provided'. Otherwise use 'compile' -->
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.activation</groupId>
    <artifactId>jakarta.activation-api</artifactId>
    <!-- Use version 2.0.1 if you want to use Jakarta EE 9 -->
    <version>1.2.2</version>
    <!-- In a managed environment like Java/Jakarta EE, use 'provided'. Otherwise use 'compile' -->
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.annotation</groupId>
    <artifactId>jakarta.annotation-api</artifactId>
    <!-- Use version 2.0.0 if you want to use Jakarta EE 9 -->
    <version>1.3.5</version>
    <!-- In a managed environment like Java/Jakarta EE, use 'provided'. Otherwise use 'compile' -->
    <scope>provided</scope>
</dependency>

The jakarta.transaction and jakarta.activation dependencies are especially relevant for the JPA metamodel generation.

Supported environments/libraries

The bare minimum is JPA 2.0. If you want to use the JPA Criteria API module, you will also have to add the JPA 2 compatibility module. Generally, we support the usage in Java EE 6+ or Spring 4+ applications.

See the following table for an overview of supported versions.

| Module | Minimum version | Supported versions |
|--------|-----------------|--------------------|
| Hibernate integration | Hibernate 4.2 | 4.2, 4.3, 5.0+, 6.2+ (not all features are available in older versions) |
| EclipseLink integration | EclipseLink 2.6 | 2.6 (probably 2.4 and 2.5 work as well, but only tested against 2.6) |
| DataNucleus integration | DataNucleus 4.1 | 4.1, 5.0 |
| OpenJPA integration | N/A | Currently not usable. OpenJPA doesn't seem to be actively developed anymore and no users asked for support yet |
| Entity View CDI integration | CDI 1.0 | 1.0, 1.1, 1.2, 2.0, 3.0 |
| Entity View Spring integration | Spring 4.3 | 4.3, 5.0, 5.1, 5.2, 5.3, 6.0 |
| DeltaSpike Data integration | DeltaSpike 1.7 | 1.7, 1.8, 1.9 |
| Spring Data integration | Spring Data 1.11 | 1.11 - 2.7, 3.1 - 3.3 |
| Spring Data WebMvc integration | Spring Data 1.11, Spring WebMvc 4.3 | Spring Data 1.11 - 2.7, Spring WebMvc 4.3 - 5.3 |
| Spring Data WebFlux integration | Spring Data 2.0, Spring WebFlux 5.0 | Spring Data 2.0 - 2.7, Spring WebFlux 5.0 - 5.3 |
| Spring HATEOAS WebMvc integration | Spring Data 2.2, Spring WebMvc 5.2 | Spring Data 2.3+, Spring WebMvc 5.2+, Spring HATEOAS 1.0+ |
| Jackson integration | 2.8.11 | 2.8.11+ |
| GraphQL integration | 17.3 | 17.3+ |
| JAX-RS integration | Any JAX-RS version | Any JAX-RS version |
| Quarkus integration | 1.4.2 | 1.4+, 2.0+, 3.1+ |

Manual setup

For compiling you will only need API artifacts; for the runtime you need impl and integration artifacts.

See the core documentation for the necessary dependencies needed to setup Blaze-Persistence. If you want to use entity views, the entity view documentation contains a similar setup section describing the necessary dependencies.
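
As a rough sketch of that compile/runtime split, a Hibernate-based Maven setup could look like the following. Treat the integration artifact id as an example, since the correct one depends on your JPA provider and its version (see the documentation mentioned above):

```xml
<!-- Compile-time: the API you program against -->
<dependency>
    <groupId>com.blazebit</groupId>
    <artifactId>blaze-persistence-core-api</artifactId>
    <scope>compile</scope>
</dependency>
<!-- Runtime: the implementation and the provider integration -->
<dependency>
    <groupId>com.blazebit</groupId>
    <artifactId>blaze-persistence-core-impl</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>com.blazebit</groupId>
    <artifactId>blaze-persistence-integration-hibernate-5.6</artifactId>
    <scope>runtime</scope>
</dependency>
```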

Documentation

The current documentation is a reference manual, split into a reference for the core module and one for the entity-view module. At some point we might introduce topical documentation, but for now you can find articles on the Blazebit Blog.

Core quick-start

First you need to create a CriteriaBuilderFactory, which is the entry point to the core API.

CriteriaBuilderConfiguration config = Criteria.getDefault();
// optionally, perform dynamic configuration
CriteriaBuilderFactory cbf = config.createCriteriaBuilderFactory(entityManagerFactory);

NOTE: The CriteriaBuilderFactory should have the same scope as your EntityManagerFactory as it is bound to it.

For demonstration purposes, we will use the following simple entity model.

@Entity
public class Cat {
    @Id
    private Integer id;
    private String name;
    @ManyToOne(fetch = FetchType.LAZY)
    private Cat father;
    @ManyToOne(fetch = FetchType.LAZY)
    private Cat mother;
    @OneToMany
    private Set<Cat> kittens;
    // Getter and setters omitted for brevity
}

If you want to select all cats and fetch their kittens as well as their father you do the following.

cbf.create(em, Cat.class).fetch("kittens.father").getResultList();

This will create quite a query behind the scenes:

SELECT cat FROM Cat cat LEFT JOIN FETCH cat.kittens kittens_1 LEFT JOIN FETCH kittens_1.father father_1

An additional bonus is that the paths and generally every expression you write will get checked against the metamodel so you can spot typos very early.

JPA Criteria API quick-start

Blaze-Persistence provides an implementation of the JPA Criteria API that allows you to mostly code against the standard JPA Criteria API while still being able to use the advanced features Blaze-Persistence provides.

All you need is a CriteriaBuilderFactory and when constructing the actual query, an EntityManager.

// This is a subclass of the JPA CriteriaBuilder interface
BlazeCriteriaBuilder cb = BlazeCriteria.get(criteriaBuilderFactory);
// A subclass of the JPA CriteriaQuery interface
BlazeCriteriaQuery<Cat> query = cb.createQuery(Cat.class);

// Do your JPA Criteria query logic with cb and query
Root<Cat> root = query.from(Cat.class);
query.where(cb.equal(root.get(Cat_.name), "Felix"));

// Finally, transform the BlazeCriteriaQuery to the Blaze-Persistence Core CriteriaBuilder
CriteriaBuilder<Cat> builder = query.createCriteriaBuilder(entityManager);
// From here on, you can use all the power of the Blaze-Persistence Core API

// And finally fetch the result
List<Cat> resultList = builder.getResultList();

This will create a query that looks just about what you would expect:

SELECT cat FROM Cat cat WHERE cat.name = :param_0

This alone is not very spectacular. The interesting part is that you can then use the Blaze-Persistence CriteriaBuilder to do your advanced SQL things, or consume your result as entity views as explained in the next part.

Entity-view usage

Every project has some kind of DTOs, and implementing these properly isn't easy. Based on the quick-start model above, we will show how entity views come to the rescue!

To make use of entity views, you will need an EntityViewManager with entity view classes registered. In a CDI environment you can inject an EntityViewConfiguration that has all discoverable entity view classes registered, but in a plain Java application you will have to register the classes yourself like this:

EntityViewConfiguration config = EntityViews.createDefaultConfiguration();
config.addEntityView(CatView.class);
EntityViewManager evm = config.createEntityViewManager(criteriaBuilderFactory);

NOTE: The EntityViewManager should have the same scope as your EntityManagerFactory and CriteriaBuilderFactory as it is bound to it.

An entity view itself is a simple interface or abstract class describing the structure of the projection that you want. It is very similar to defining an entity class with the difference that it is based on the entity model instead of the DBMS model.

@EntityView(Cat.class)
public interface CatView {
    @IdMapping
    public Integer getId();

    @Mapping("CONCAT(mother.name, 's kitty ', name)")
    public String getCuteName();

    public SimpleCatView getFather();

}
@EntityView(Cat.class)
public interface SimpleCatView {
    @IdMapping
    public Integer getId();

    public String getName();

}

The CatView has a property cuteName, which will be computed by the JPQL expression CONCAT(mother.name, 's kitty ', name), and a subview for father. Note that, although not required in this particular case, every entity view for an entity type should have an id mapping if possible. Entity views without an id mapping will by default have equals and hashCode implementations that consider all attributes, whereas with an id mapping only the id is considered. The SimpleCatView is the projection used for the father relation and consists only of the id and the name of the Cat.
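
In plain Java terms, the id-based equality described above amounts to something like the following. This is a hypothetical hand-written sketch for illustration only, not the actual generated implementation:

```java
import java.util.Objects;

// Hypothetical sketch of what an id-based equals/hashCode amounts to for an
// entity view with an @IdMapping: two view instances are equal iff they
// represent the same entity id, regardless of their other attributes.
class CatViewImpl {
    final Integer id;
    final String name;

    CatViewImpl(Integer id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        // Only the id is considered; name plays no role in equality
        return o instanceof CatViewImpl && Objects.equals(id, ((CatViewImpl) o).id);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(id);
    }
}
```

Without an id mapping, the generated implementations would instead compare all attributes, which is usually not what you want for views of an identifiable entity.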

You just created two DTO interfaces that contain projection information. Now the interesting part is that entity views can be applied to any query, so you can define a base query and then create the projection like this:

CriteriaBuilder<Cat> cb = criteriaBuilderFactory.create(entityManager, Cat.class);
cb.whereOr()
    .where("father").isNull()
    .where("father.name").like().value("Darth%").noEscape()
.endOr();
CriteriaBuilder<CatView> catViewBuilder = evm.applySetting(EntityViewSetting.create(CatView.class), cb);
List<CatView> catViews = catViewBuilder.getResultList();

Behind the scenes, this will execute the following optimized query and transparently build your entity view objects from the results.

SELECT
    cat.id,
    CONCAT(mother_1.name, 's kitty ', cat.name),
    father_1.id,
    father_1.name
FROM Cat cat
LEFT JOIN cat.father father_1
LEFT JOIN cat.mother mother_1
WHERE father_1 IS NULL
   OR father_1.name LIKE :param_0

See the left joins created for relations used in the projection? These are implicit joins, which are by default what we call "model-aware". If you specified that a relation is optional = false, we would generate an inner join instead. This is different from how JPQL path expressions are normally interpreted, but in the case of projections like entity views, it is just what you would expect! You can always override the join type of implicit joins with joinDefault if you like.

Questions or issues

Drop by on Zulip Chat and ask questions any time, or just create an issue on GitHub or ask on Stack Overflow.

Commercial support

You can find commercial support offerings by Blazebit in the support section.

If you are a commercial customer and want to use commercial releases, you need to define the following repository in a profile of your project or the settings.xml located in ~/.m2.

<repository>
  <id>blazebit</id>
  <name>Blazebit</name>
  <url>https://nexus.blazebit.com/repository/maven-releases/</url>
</repository>

You also need to add the following server in the settings.xml with the appropriate credentials:

<server>
  <id>blazebit</id>
  <username>USERNAME</username>
  <password>PASSWORD</password>
</server>

Commercial customers also get access to the commercial repository, where they can access the source code of commercial releases, create issues that are treated with higher priority and browse commercial releases.

Using snapshots

To use the current snapshots which are published to the Sonatype OSS snapshot repository, you need to define the following repository in a profile of your project or the settings.xml located in ~/.m2.

<repository>
  <id>sonatype-snapshots</id>
  <name>Sonatype Snapshots</name>
  <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
</repository>
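
If you prefer to keep the snapshot repository out of regular builds, you can wrap it in a profile and activate it on demand, e.g. with mvn -P blaze-persistence-snapshots. The profile id is just an example, and snapshots are enabled explicitly for clarity:

```xml
<profiles>
    <profile>
        <id>blaze-persistence-snapshots</id>
        <repositories>
            <repository>
                <id>sonatype-snapshots</id>
                <name>Sonatype Snapshots</name>
                <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
                <snapshots>
                    <enabled>true</enabled>
                </snapshots>
            </repository>
        </repositories>
    </profile>
</profiles>
```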

Also see the Maven documentation for further details.

Setup local development

Here are some notes about setting up a local environment for testing.

Setup general build environment

Although Blaze-Persistence still supports running on Java 7, the build must be run with at least JDK 8. When doing a release, at least JDK 9 is required, as we need to build some Multi-Release (MR) JARs. Since we try to support the latest JDK versions as well, developers that want to build the project with JDK 11+ are required to define a system property for a release build.

The system property jdk8.home should be set to the path to a Java 7 or 8 installation that contains either jre/lib/rt.jar or jre/lib/classes.jar. This property is necessary when using JDK 11+ because sun.misc.Unsafe.defineClass was removed.

Building the website and documentation

You have to install GraphViz and make it available in your PATH.

After that, it's easiest to just invoke ./serve-website.sh which builds the documentation, website and starts an embedded server to serve at port 8820.

Checkstyle in IntelliJ

  1. Build the whole thing with mvn clean install once to have the checkstyle-rules jar in your M2 repository
  2. Install the CheckStyle-IDEA Plugin
  3. After a restart, go to Settings > Other Settings > Checkstyle and configure version 9.3.0
  4. Add a Third party check that points to the checkstyle-rules.jar of your M2 repository
  5. Add a configuration file named Blaze-Persistence Checkstyle rules pointing to checkstyle-rules/src/main/resources/blaze-persistence/checkstyle-config.xml

Now you should be able to select Blaze-Persistence Checkstyle rules in the dropdown of the CheckStyle window. Click on Check project and Checkstyle will run once for the whole project; after that it should do some work incrementally.

Testing a JPA provider and DBMS combination

By default, a Maven build (mvn clean install) will test against H2 and Hibernate 5.2, but you can activate different profiles to test other combinations. To test a specific combination, you need to activate at least 4 profiles:

  • One of the JPA provider profiles
    • hibernate-6.6 + the jakarta profile
    • hibernate-6.5 + the jakarta profile
    • hibernate-6.4 + the jakarta profile
    • hibernate-6.3 + the jakarta profile
    • hibernate-6.2 + the jakarta profile
    • hibernate-5.6
    • hibernate-5.5
    • hibernate-5.4
    • hibernate-5.3
    • hibernate-5.2
    • hibernate-5.1
    • hibernate-5.0
    • hibernate-4.3
    • hibernate
    • eclipselink
    • datanucleus-5.1
    • datanucleus-5
    • datanucleus-4
    • openjpa
  • A DBMS profile
    • h2
    • postgresql
    • mysql
    • mysql8
    • oracle
    • db2
    • mssql
    • firebird
    • sqllite
  • A Spring data profile
    • spring-data-2.7.x
    • spring-data-2.6.x
    • spring-data-2.5.x
    • spring-data-2.4.x
    • spring-data-2.3.x
    • spring-data-2.2.x
    • spring-data-2.1.x
    • spring-data-2.0.x
    • spring-data-1.11.x
  • A DeltaSpike profile
    • deltaspike-1.9
    • deltaspike-1.8
    • deltaspike-1.7

The default DBMS connection settings are defined via Maven properties, so you can override them in a build by passing them as system properties:

  • jdbc.url
  • jdbc.user
  • jdbc.password
  • jdbc.driver

The values are defined in e.g. core/testsuite/pom.xml in the respective DBMS profiles.

For executing tests against a database on a dedicated host you might want to specify the following system property -DdbHost=192.168.99.100.

Testing with Jakarta Persistence provider

To build everything use mvn -pl core/testsuite-jakarta-runner clean install -am -P "hibernate-6.2,jakarta,h2,spring-data-2.6.x,deltaspike-1.9" -DskipTests and to run tests use mvn -pl core/testsuite-jakarta-runner clean install -P "hibernate-6.2,jakarta,h2,spring-data-2.6.x,deltaspike-1.9" "-Dtest=com.blazebit.persistence.testsuite.SetOperationTest#testUnionAllOrderBySubqueryLimit".

Switching JPA provider profiles in IntelliJ

When switching between Hibernate and other JPA provider profiles, IntelliJ does not unmark the basic or hibernate source directories in core/testsuite. If you encounter errors like duplicate class file found or similar, make sure that

  • With a Hibernate profile you unmark the core/testsuite/src/main/basic directory as source root
  • With a non-Hibernate profile you unmark the core/testsuite/src/main/hibernate and core/testsuite/src/test/hibernate directory as source root

Unmarking as source root can be done by right clicking on the source directory, going to the submenu Mark directory as and finally clicking Unmark as Sources Root.

Using DataNucleus profiles in IntelliJ

DataNucleus requires bytecode enhancement to work properly, which requires an extra step to be able to do testing within IntelliJ. Usually, when switching the JPA provider profile, it is recommended to trigger a Rebuild Project action in IntelliJ to avoid strange errors caused by previous bytecode enhancement runs. After that, the entities in the project core/testsuite have to be enhanced. This is done through a Maven command.

  • DataNucleus 4: mvn -P "datanucleus-4,h2,deltaspike-1.8,spring-data-2.0.x" -pl core/testsuite,entity-view/testsuite,integration/spring-data/testsuite/webmvc,integration/spring-data/testsuite/webflux datanucleus:enhance
  • DataNucleus 5: mvn -P "datanucleus-5,h2,deltaspike-1.8,spring-data-2.0.x" -pl core/testsuite,entity-view/testsuite,integration/spring-data/testsuite/webmvc,integration/spring-data/testsuite/webflux datanucleus:enhance
  • DataNucleus 5.1: mvn -P "datanucleus-5.1,h2,deltaspike-1.8,spring-data-2.0.x" -pl core/testsuite,entity-view/testsuite,integration/spring-data/testsuite/webmvc,integration/spring-data/testsuite/webflux datanucleus:enhance

After doing that, you should be able to execute any test in IntelliJ.

Note that if you make changes to an entity class or add a new entity class you might need to redo the rebuild and enhancement.

Firebird

When installing the 3.x version, you also need a 3.x JDBC driver. Additionally, you should add the following to firebird.conf:

WireCrypt = Enabled

After creating the DB with create database 'localhost:test' user 'sysdba' password 'sysdba';, you can connect via JDBC with jdbc:firebirdsql:localhost:test?charSet=utf-8.

Oracle

When setting up Oracle locally, keep in mind that when you connect to it, you have to set NLS_SORT to BINARY. Since the JDBC driver derives values from the locale settings of the JVM, you should set the default locale settings to en_US. In IntelliJ, when defining the Oracle database, go to the Advanced tab and specify the JVM options -Duser.country=us -Duser.language=en.

GraalVM for native images with Quarkus

The general setup required for building native images with GraalVM is described in https://quarkus.io/guides/building-native-image.

  • Install GraalVM 20.2.0 (Java 11) and make sure you install the native-image tool and set GRAALVM_HOME environment variable
  • Install required packages for a C development environment

For example, run the following Maven build to execute native image tests for H2:

mvn -pl examples/quarkus/testsuite/native/h2 -am integration-test -Pnative,h2,spring-data-2.7.x,deltaspike-1.9

Under Windows, make sure you run Maven builds that use native-image from the VS2017 native tools command line.

Website deployment

You can use build-deploy-website.sh to deploy to the target environment, but you need to configure the following servers in ~/.m2/settings.xml.

Id: staging-persistence.blazebit.com User/Password: user/****

Id: persistence.blazebit.com User/Password: user/****

Licensing

This distribution, as a whole, is licensed under the terms of the Apache License, Version 2.0 (see LICENSE.txt).

References

Project Site: https://persistence.blazebit.com


blaze-persistence's Issues

EntityViewSetting subview support

Currently the EntityViewSetting class can only handle direct attributes. It should also be able to handle subviews and collections.

Implement OUTER() function in expressions

The OUTER() function is used within a subquery to retrieve the join alias for a given absolute path from the surrounding query.
Extract an ExpressionFactory interface with the createSimpleExpression method. The normal implementation should not support the OUTER() function. Only the new second implementation for subqueries should support the function.

Maybe remove QueryBuilder.select(java.lang.Class)

ObjectBuilders now need the JPA Metamodel and therefore we can not create them as we did before in the EntityViewExtension.
Either remove the method and auto-registration or introduce a possibility to make the persistence unit available to the extension.

For PaginatedCriteriaBuilder: throw exception for subqueries in order by that reference collection fields of the surrounding query

This requirement resulted from issue #38
An exception is required in this scenario because this is the only scenario in which the id query can result in multiple result set rows with the same root entity id. This is because we have to include the subquery in the select clause if it is used in the order by. Furthermore, if the subquery references a collection from the surrounding query it will cause the final result set to contain multiple rows for the same root entity.

FROM statements are illegal in JPQL

Currently we generate Hibernate-specific FROM statements, which is wrong because that is not valid JPQL. We can easily generate something like SELECT rootAlias FROM...

COUNT(*) syntax not allowed in JPQL

COUNT(*) is not allowed in JPQL, but Hibernate supports it. We should maybe switch to COUNT(rootAlias.id) which is what we actually want I guess.

Array access does not work as expected

Given the following JPQL query:

SELECT d.id, contacts.name FROM Document d LEFT JOIN d.contacts contacts WITH KEY(contacts) = 1

Hibernate generates the following SQL query:

select
    document0_.id as col_0_0_,
    person2_.name as col_1_0_ 
from
        Document document0_ 
left outer join
        contacts contacts1_ 
            on document0_.id=contacts1_.Document_id 
left outer join
        Person person2_ 
            on contacts1_.contacts_id=person2_.id 
            and (
                contacts1_.contacts_KEY=1
            )

The first left outer join is performed because d.contacts uses a collection table. Hence, if one document has assigned two contacts this single join yields two rows for this document on the left side. Now, when the Person columns are joined we receive NULLs for the contact with KEY != 1.

Introduce flag that allows the generation of on clauses as where clauses

Hibernate currently only allows the usage of parameters, constants and the join alias in the with-clause of a join. As a workaround, we could introduce a flag that allows the generation of with-clauses as where-clauses.
Where clauses should look as follows:
OUTER JOIN relation ON condition
should result in
WHERE (relation.id IS NULL OR condition)

The NULL check is only required for outer joins (i.e. all joins except INNER JOIN).

Make CriteriaBuilder context dependent

Create a class CriteriaFactory which has a context and can produce criteria builders.
This can be used to bind configuration originating from extensions to that context.

Support expressions around subqueries in order by for PaginatedCriteriaBuilder

We must support the following for id queries (we do not support this in the CriteriaBuilder or SubqueryBuilder):
e.g. selectSubquery("a").fromblabla.end().page(0,1).orderBySubquery("MAX(a)", true, false)

For the id query we must extract the wrapping expression for the order by clause and apply it to the corresponding select alias (which must correspond to a subquery). E.g we then have

SELECT id, MAX(SELECT bla FROM bla) AS a ORDER BY a

Restructure EntityView Extension

Currently EntityViewManagers have to be created by calling EntityViewManagerProvider.from(EntityManager). CriteriaBuilders are created via Criteria.from(EntityManager, Class, String).

In the future we could provide an extension, that looks for EntityManager beans, and creates beans for EntityViewManagers and CriteriaBuilders so that those instances can be injected.

Maybe we also need to restructure how we can create object builders.

Support expressions around subqueries in where, select, having

We currently have where().
Replace this with whereSubquery(String expression), whereSubquery(String subqueryAlias, String expression) and whereSubquery().

Replace the subquery alias in the expression with the subquery expression. The expression can be transformed right after the subquery builder terminates.

Test expression literals

JPQL defines several literals; check whether the parser can properly handle them and which of them are supported in Hibernate, because I think Hibernate had problems with some literals.

Rework entity view filters

Currently filters for mapping attributes can be applied with @MappingFilter which accepts a Filter class.
Hibernate filters inspired me and I want to do something similar.

Since providing a Filter class is much more typesafe, we will leave it like that, but maybe the Filter interface will become an abstract class that offers some basic functionality. The @MappingFilter annotation will maybe get a name parameter which is defaulted to the attribute name.

Also, the constructor contract defined in the Filter interface should be removed if possible and replaced by setParameter and getParameter methods on the Filter interface/abstract class. We need getParameter for introspection in EntityViewSetting, as we don't know parameter names in general but mostly assume there is only one parameter that can be set. We could maybe also introduce a convention that single-parameter filters should use the parameter name 'value', or even add another method that accepts only an Object for exactly that case. That method would do what is currently provided by the constructor, but the contract would be much cleaner.

Since a method attribute or view type can then have multiple filters, but only one default, we have to add that in the metamodel.

To make filters more flexible we might also want to consider a possibility to add more than a single restriction. Currently a RestrictionBuilder is passed into the filter which is fine most of the time, but sometimes we might want to apply functions on the left hand side too, or even use more complex predicates.

This should finally make them about as powerful as Hibernate filters (I think), but more typesafe.
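A rough sketch of what the reworked contract could look like. All names here are assumptions about a possible design, not the actual Blaze-Persistence API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed Filter contract: an abstract class with
// setParameter/getParameter instead of a constructor contract, plus the
// 'value' convention for the common single-parameter case.
public abstract class AbstractFilter {

    public static final String VALUE = "value"; // single-parameter convention

    private final Map<String, Object> parameters = new HashMap<>();

    public void setParameter(String name, Object value) {
        parameters.put(name, value);
    }

    // Needed for introspection in EntityViewSetting, where parameter
    // names are not known in general.
    public Object getParameter(String name) {
        return parameters.get(name);
    }

    // Convenience replacing what the constructor contract currently does.
    public void setValue(Object value) {
        setParameter(VALUE, value);
    }

    public Object getValue() {
        return getParameter(VALUE);
    }
}

// A concrete filter would add the restriction-building logic, e.g. an
// apply(RestrictionBuilder) method, which is omitted here.
class ContainsFilter extends AbstractFilter {
}
```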

Joining a path multiple times

Currently we can only join a path like "doc.localized.name" once, but it is sometimes necessary to join it multiple times with different aliases.

This is especially necessary for queries that use indexed collections.

If, for example, one wanted to select two distinct values from a map attribute with different keys, the resulting JPQL query could look like this:

"SELECT o1, o2 FROM SomeType s LEFT JOIN s.map o1 LEFT JOIN s.map o2 WHERE KEY(o1) = 1 AND KEY(o2) = 2"

Note that this query, in contrast to the next one, only returns a result if there are objects for both keys 1 and 2.

"SELECT o1, o2 FROM SomeType s LEFT JOIN s.map o1 LEFT JOIN s.map o2 WHERE KEY(o1) = 1 OR KEY(o2) = 2"

This query would return null for o2 if there was no entry with key 2.

When enabling this feature, please consider the different use cases.
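The difference between the AND and the OR variant can be simulated in plain Java, without JPA. This is a toy model of the left-join semantics, not the generated SQL:

```java
import java.util.HashMap;
import java.util.Map;

// Tiny simulation of why the AND variant drops the row when one key is
// missing, while the OR variant keeps it with a null value.
public class JoinSemanticsDemo {

    static String[] select(Map<Integer, String> map, int key1, int key2, boolean useAnd) {
        String o1 = map.get(key1); // LEFT JOIN s.map o1 ... KEY(o1) = key1
        String o2 = map.get(key2); // LEFT JOIN s.map o2 ... KEY(o2) = key2
        boolean matches = useAnd ? (o1 != null && o2 != null)
                                 : (o1 != null || o2 != null);
        return matches ? new String[] { o1, o2 } : null; // null = no result row
    }

    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "first"); // no entry for key 2
        System.out.println(select(map, 1, 2, true));  // null: AND needs both keys
        System.out.println(java.util.Arrays.toString(select(map, 1, 2, false))); // [first, null]
    }
}
```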

AliasManager Hierarchy

We need a hierarchy of alias managers because there could be two sibling subqueries in which equal aliases are allowed.
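A possible shape for such a hierarchy: each query scope gets its own alias manager with a pointer to the parent scope, so sibling subqueries may register the same alias while aliases from enclosing queries stay resolvable. The names below are illustrative, not the actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hierarchical alias manager sketch: duplicate aliases are rejected only
// within one scope; resolution walks up the parent chain.
public class AliasManager {

    private final AliasManager parent;
    private final Map<String, Object> aliases = new HashMap<>();

    public AliasManager(AliasManager parent) {
        this.parent = parent;
    }

    public void register(String alias, Object node) {
        if (aliases.containsKey(alias)) {
            throw new IllegalArgumentException("Alias already registered in this scope: " + alias);
        }
        aliases.put(alias, node);
    }

    // Resolves an alias in this scope or any enclosing scope.
    public Object resolve(String alias) {
        Object node = aliases.get(alias);
        if (node != null) {
            return node;
        }
        return parent != null ? parent.resolve(alias) : null;
    }
}
```

Two child managers of the same root can then both register an alias such as "x" without a conflict, which is exactly the sibling-subquery case.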

Implement expressions for subqueries in entity views

We will introduce methods in the criteria builder like xxxSubquery(String expression, String alias, ...) that make it possible to wrap an expression around a subquery. The MappingSubquery annotation and the metamodel have to be adapted. Also we have to adapt the current tuple element mappers for subqueries.

Add checks for invalid fetch joins

In JPA it is invalid to use fetch joins when selecting something different from the root entity. Add checks to the criteria implementation so we get the error at query-building time instead of at query time.
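Such a check could fail fast while the query is being built. A simplified sketch over plain strings; the real validation would of course operate on the internal query model, and all names here are assumptions:

```java
import java.util.List;

// Sketch: JPA only allows JOIN FETCH when the query selects the root
// entity itself, so we can reject invalid combinations at build time.
public class FetchJoinValidator {

    static void validate(String rootAlias, List<String> selectItems, List<String> fetchJoins) {
        if (fetchJoins.isEmpty()) {
            return;
        }
        // An empty select clause implicitly selects the root.
        boolean selectsRootOnly = selectItems.isEmpty()
                || (selectItems.size() == 1 && selectItems.get(0).equals(rootAlias));
        if (!selectsRootOnly) {
            throw new IllegalStateException(
                "Fetch joins " + fetchJoins + " require selecting only the root '" + rootAlias + "'");
        }
    }
}
```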

Evaluate getQueryString performance and maybe implement caching

Currently we always generate a fresh query when we call getQueryString. If the performance is bad, we could think about caching the query string by name.
A CriteriaBuilderFactory could offer a method

from(String queryName, EntityManager em, Class entityClass, String alias)

The queryName could act as a cache key.
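Caching by query name could then be little more than a concurrent map keyed by that name, assuming the builder structure behind a given name is stable. A sketch with illustrative names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of a query-string cache keyed by the user-supplied query name,
// as the proposed from(String queryName, ...) overload would allow.
public class QueryStringCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Generates the query string at most once per name; later calls
    // return the cached string without invoking the generator.
    public String getQueryString(String queryName, Supplier<String> generator) {
        return cache.computeIfAbsent(queryName, k -> generator.get());
    }
}
```

Invalidation would only be needed if a builder registered under a name can change, which is why the stable-structure assumption matters.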

Access multiple elements of the same collection within a single query

Referring to InterfaceViewTest.testInterface(), the following query is generated so far:

SELECT contacts, d.id, contacts, d.name FROM Document d LEFT JOIN d.contacts contacts WHERE KEY(contacts) = 1 AND KEY(contacts) = :contactPersonNumber ORDER BY d.id ASC NULLS LAST

This results in an empty result set if :contactPersonNumber != 1
Instead the query should be built like this:

SELECT contacts1, d.id, contacts2, d.name FROM Document d LEFT JOIN d.contacts contacts1 LEFT JOIN d.contacts contacts2 WHERE KEY(contacts1) = 1 OR KEY(contacts2) = :contactPersonNumber ORDER BY d.id ASC NULLS LAST

Implement Case When

We have already prototyped the case when API; now it needs to be implemented, and some more tests have to be added.
