spring-data-jpa-guide's People

Contributors

zhangzhenhuajack


spring-data-jpa-guide's Issues

Spring Boot JPA: help with querying child rows in a self-referencing table

My query involves three key tables:
Garden (estate) -> House (listing) -> Entrust (rent/sale consignment). Local addresses are complicated; here in Shantou, for example, an estate such as Jintaozhuang contains Jintaozhuang East and Jintaozhuang West, and under the east and west sections there are Huaming Garden, Song Yuan and Xin Yuan. All of these live in the same Garden table. The requirement is that when a user searches for the string "金涛" (Jintao), the listings of every garden whose name contains those characters, plus all of its descendant gardens, should be returned.

The three tables in detail:

Garden (estate) table:

@Entity
public class Garden {

    @Id
    private long id;
    private String name;
    private long fatherId; // parent id
    private long level;
    private String relationPath; // added later: a path like "/parentId/ownId/"

    @OneToMany
    @JoinColumn(name = "fatherId")
    private List<Garden> children = new ArrayList<>();

    @OneToMany
    @JoinColumn(name = "garden")
    private List<House> houseList = new ArrayList<>();
}

House (listing) table:

@Entity
public class House {

    @Id
    private long id;
    private long gardenId;
    private long houseType1Id;
    private long room;
    private double area;
    // <other fields omitted>

    @JsonManagedReference
    @OneToMany(mappedBy = "house")
    private List<Entrust> entrustList;

    @ManyToOne
    @JoinColumn(name = "gardenId", insertable = false, updatable = false)
    @JsonBackReference
    private Garden garden;
}

Entrust (rent/sale consignment) table:

@Entity
public class Entrust {

    @Id
    private long id;
    private long houseId;
    private double totalPrice;
    private long isSale;

    @ManyToOne
    @JoinColumn(name = "houseId", insertable = false, updatable = false)
    @JsonBackReference
    private House house;
}
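For what it's worth, the relationPath column added above is usually the key to this kind of query: collect the paths of the gardens whose name matches the keyword, then select every garden whose path starts with one of them (the match itself or any descendant), and finally join to their houses. Below is a minimal in-memory sketch of that path-prefix idea in plain Java; the class and field names are illustrative, not taken from the project.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// In-memory illustration of the relationPath prefix idea (names are hypothetical).
class RelationPathDemo {

    record Garden(long id, String name, String relationPath) {}

    // A garden is selected when the relationPath of any keyword-matched garden
    // is a prefix of its own relationPath, i.e. it is the match itself or a descendant.
    static Set<Long> matchGardenIds(List<Garden> gardens, String keyword) {
        List<String> hitPaths = gardens.stream()
                .filter(g -> g.name().contains(keyword))
                .map(Garden::relationPath)
                .collect(Collectors.toList());
        return gardens.stream()
                .filter(g -> hitPaths.stream().anyMatch(p -> g.relationPath().startsWith(p)))
                .map(Garden::id)
                .collect(Collectors.toSet());
    }
}
```

In JPA the same shape can be pushed into the database, e.g. with a self-join along the lines of `where h.garden.relationPath like concat(m.relationPath, '%')`; that JPQL is a hedged sketch, not tested against this schema.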

How can JPA return the results of an IN query in the same order as the values passed in?

I haven't found a non-native way to write this.

It can be done directly with MySQL's native syntax:
SELECT * from models where id in (26612,26611,26610) order by field(id,26612,26611,26610);

@Query(value = "select * from dress where id in (?1) and source = ?2 order by field(id,?3)", nativeQuery = true)
List queryDid(List dids, Integer code, List dids2);
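If a non-native solution is required, one hedged alternative is to run the plain IN query and then restore the caller's order in memory, keeping MySQL's field() out of the picture entirely. A small sketch with illustrative names:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Restore the caller's id order after a plain "where id in (...)" query.
class OrderByInList {

    static <T> List<T> sortByIdOrder(List<T> rows, List<Long> ids, Function<T, Long> idOf) {
        // position of each id in the order the caller passed it
        Map<Long, Integer> pos = new HashMap<>();
        for (int i = 0; i < ids.size(); i++) {
            pos.put(ids.get(i), i);
        }
        List<T> sorted = new ArrayList<>(rows);
        // unknown ids (not in the list) sink to the end
        sorted.sort(Comparator.comparingInt((T r) -> pos.getOrDefault(idOf.apply(r), Integer.MAX_VALUE)));
        return sorted;
    }
}
```

This trades one in-memory sort for database portability; for small IN lists the cost is negligible.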

@ManyToMany generates a join table; is there a way to add extra columns to it?

For example, with the code below, the join table SYSTEM_USER_ROLE only gets the two columns "mid_user_id" and "mid_role_id" as a composite primary key.
How can extra columns such as "lastModifiedDate", "createDate" and "version" be added to the join table?

@ManyToMany(
            targetEntity = ManyToManyRoleDO.class,
            cascade = {CascadeType.REFRESH},
            fetch = FetchType.EAGER
    )
    @JoinTable(
            name = SYSTEM_USER_ROLE,
            joinColumns = {@JoinColumn(name = "mid_user_id", referencedColumnName = "user_id")},
            inverseJoinColumns = {@JoinColumn(name = "mid_role_id", referencedColumnName = "role_id")}
    )
    @JsonManagedReference
    private Set<ManyToManyRoleDO> roles;

I took over a MyBatis-Plus project and am refactoring it to JPA. The service layer still contains the original MyBatis data-access code. Since the replacement is gradual, a single service method can contain both JPA and MyBatis database access at the same time.

@Transactional
@Override
// Splits the task identified by id (with its file id) into sub-tasks
public Result startRuleReview(Integer id) {
    // 1. Validation
    String key = RedisKeyConstants.FILE_REVIEW.concat(String.valueOf(id));
    boolean lock = jedisUtils.lock(key, NumberConstants.THIRTY);
    if (!lock) {
        return Result.createError("任务锁定中");
    }
    //RuleReviewTask task = ruleReviewTaskMapper.selectByPrimaryKey(id);
    JPARuleReviewTask task = jpaTaskRepository.getOne(id.longValue());

    if (!ReviewTaskStateEnum.WATI_RULE_REVIEW.getState().equals(task.getStatus())) {
        return Result.createError("任务状态非待规则审核");
    }

    // 2. Create sub-tasks
    ruleReviewDetailMapper.delByTaskId(id);
    // Before creating, delete the sub-task records tied to this main-task id (stale-data cleanup); this still uses MyBatis for now.
    // Check what issues may arise from mixing the two kinds of access in one transaction. Without this, the business logic would have to delete the child-table records separately while leaving the main-table record alone.
    // The code below splits the main task into sub-tasks by manufactureId and updates the sub-task states.
    // It uses the rule set chosen by this department (a persisted selection); alternatively the user could pass the rule set in on every call.
    List<RuleInfoDto> rules = ruleMapper.queryByDepart(task.getSysDepartId());

jpa:
  generate-ddl: false
  show-sql: true
  hibernate:
    ddl-auto: update
  properties:
    hibernate:
      connection:
        handling_mode: DELAYED_ACQUISITION_AND_HOLD
      event:
        merge:
          entity_copy_observer: allow
  open-in-view: false

Problem description:
When a request hits the @Transactional annotation on a method, a Session is opened.
Then, inside the method body:
On the first call into the JPA DAO layer, a transaction is opened and a database connection obtained; after execution the transaction is not committed and the connection is not released.
On the next call into the MyBatis DAO layer, transaction propagation makes it join the transaction opened via JPA and reuse its database connection; again nothing is closed afterwards.
When the method body exits, the transaction is committed, the connection released, and the Session closed.
Is this understanding correct? When the JPA and MyBatis DAO layers are mixed, the mechanics need to be clearly understood, otherwise errors are very hard to troubleshoot.

What should we do when table names are not fixed?

We often face a variable number of tables that all share the same structure. For example, table A records the mapping between carriers and table names; whenever a new carrier joins our system we create a new table and insert a row into table A with the carrier's name and the corresponding table name. When that carrier logs into our system, it operates on its own table. How should this scenario be handled? At the moment we concatenate the SQL by hand in code, which feels clumsy.
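One way to keep the hand-built SQL under control, sketched here under assumptions (the mapping and all names are hypothetical), is to resolve the table name strictly through a whitelist loaded from table A and then hand the resulting string to a native query (for example via EntityManager.createNativeQuery). User input never reaches the SQL text directly:

```java
import java.util.Map;

// Whitelist lookup: this map mirrors what table A stores (hypothetical names).
class DynamicTableSql {

    static final Map<String, String> OPERATOR_TABLES = Map.of(
            "operatorA", "records_operator_a",
            "operatorB", "records_operator_b");

    // Build SQL only from whitelisted table names; never concatenate raw user input.
    static String selectAllSql(String operator) {
        String table = OPERATOR_TABLES.get(operator);
        if (table == null) {
            throw new IllegalArgumentException("unknown operator: " + operator);
        }
        return "select * from " + table + " where id = ?";
    }
}
```

The whitelist map would be refreshed from table A whenever a new carrier is onboarded, so the set of legal table names always comes from the database, not from the request.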

A question about the AUTO flush mechanism

Teacher, I saw you wrote that two situations trigger an automatic flush:

  1. Before the transaction commits, i.e. any point before transactionManager.commit() executes
  2. Executing any JPQL or native SQL (as opposed to methods that operate on entities directly) triggers a flush

But while testing I found that executing a count query also triggers AutoFlush. Why is that?

    @Autowired
    private TransactionTemplate transactionTemplate;
    @Autowired
    private UserRepository userRepository;

    @Test
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    @Rollback(value = false)
    void testCountAutoFlush() {
        Long userCount = transactionTemplate.execute(transactionStatus -> {
            User user = userRepository.save(User.builder().name("test").email("[email protected]").build());
            long count = userRepository.count();
            user.setName("age");
            return count;
        });
    }

Spring Data JPA: solving the N+1 SQL problem with @EntityGraph (@OneToOne, @OneToMany)

The scenario that produces the N+1 SQL: we have the following four entities, whose core content is as follows:


import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.Setter;
import org.apache.commons.lang3.StringUtils;
import org.hibernate.annotations.*;

import javax.persistence.*;
import javax.persistence.Entity;
import javax.persistence.Table;
import java.time.Instant;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

@Setter
@Getter
@EqualsAndHashCode(of = {"id"}, callSuper = true)
@Entity
@Table(name = "tpusers")
public class Tpuser {
  
    private String name;
    private String email;
    private String uuid;
    private Parent parent;
    private Teacher teacher;
    private List<ThirdPartyTpuser> thirdPartyTpusers;
    private Long id;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long getId() {
        return this.id;
    }

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "tpuser")
    public List<ThirdPartyTpuser> getThirdPartyTpusers() {
        return thirdPartyTpusers;
    }

    @OneToOne(mappedBy = "tpuser")
    @Fetch(FetchMode.JOIN)
    public Parent getParent() {
        return parent;
    }

    @OneToOne(mappedBy = "tpuser")
    @Fetch(FetchMode.JOIN)
    public Teacher getTeacher() {
        return teacher;
    }

}

@Getter
@Setter
@Entity
@Table(name = "parents")
@Where(clause = "deleted = false")
public class Parent extends AbstractDeletedAuditBase {
    private String address;
    private Long state;
    private Tpuser tpuser;
    private Long id;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long getId() {
        return this.id;
    }

    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "tpuser_id", referencedColumnName = "id")
    public Tpuser getTpuser() {
        return tpuser;
    }

}

@Getter
@Setter
@EqualsAndHashCode(of = "tpuser_id")
@Entity
@Table(name = "teachers")
@Include(rootLevel = true, type = "Teachers")
@Where(clause = "deleted = false")
public class Teacher extends AbstractDeletedAuditBase {
    private Long areaId;
    private TeacherType type;
    private Tpuser tpuser;
    private Long id;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long getId() {
        return this.id;
    }
    @Enumerated(EnumType.STRING)
    public TeacherType getType() {
        return type;
    }

    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "tpuser_id")
    public Tpuser getTpuser() {
        return tpuser;
    }

}
@Getter
@Setter
@Entity
@Table(name = "third_party_tpusers")
public class ThirdPartyTpuser extends AbstractVersionAuditBase {
    private String platform;
    private String openid;
    private String unionid;
    private Tpuser tpuser;
    private Long id;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long getId() {
        return this.id;
    }

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "uid")
    public Tpuser getTpuser() {
        return tpuser;
    }
}

In other words, the relationships among the four entities above are: Tpuser is EAGER to Parent, EAGER to Teacher, and LAZY to third_party_tpusers.

The problem we ran into is as follows:

public interface TpuserRepository extends GenericUserRepository<Tpuser> {
    // @EntityGraph(attributePaths = {"thirdPartyTpusers","teacher","parent"}, type = EntityGraph.EntityGraphType.LOAD) commenting this annotation out produces the N+1 SQL problem
    List<Tpuser> findAllByIdIn(Iterable<Long> ids);
}

With @EntityGraph commented out, running the following test case produces the N+1 SQL problem:

    @Test
    public void findByUuid() throws Exception {
        List<Tpuser> tpusers  = userRepository.findAllByIdIn(Lists.newArrayList(1L,2L));
        tpusers.forEach(tpuser -> {
            //use getId() to simulate business code reading values from the other three entities
            System.out.println(tpuser.getThirdPartyTpusers().get(0).getId());
            System.out.println(tpuser.getTeacher().getId());
            System.out.println(tpuser.getParent().getId());
        });
    }

The N+1 SQL looks like this:

2021-09-17 16:42:03.356 DEBUG [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] org.hibernate.SQL                        : select tpuser0_.id as id1_24_, tpuser0_.created_at as created_2_24_, tpuser0_.updated_at as updated_3_24_, tpuser0_.lock_version as lock_ver4_24_, tpuser0_.auto_generate as auto_gen5_24_, tpuser0_.email as email6_24_, tpuser0_.gender as gender7_24_, tpuser0_.invitation_code_group as invitati8_24_, tpuser0_.invited_by_code as invited_9_24_, tpuser0_.mobile_phone as mobile_10_24_, tpuser0_.mobile_phone_validated as mobile_11_24_, tpuser0_.name as name12_24_, tpuser0_.password_hash as passwor13_24_, tpuser0_.password_updated_at as passwor14_24_, tpuser0_.state as state15_24_, tpuser0_.uuid as uuid16_24_ from tpusers tpuser0_ where tpuser0_.id in (? , ?)

2021-09-17 16:42:03.480 TRACE [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([uuid16_24_] : [VARCHAR]) - [81164fff-4184-47c3-84a5-d44e71400bd4]
2021-09-17 16:42:03.500 DEBUG [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] org.hibernate.SQL                        : select parent0_.id as id1_13_0_, parent0_.created_at as created_2_13_0_, parent0_.updated_at as updated_3_13_0_, parent0_.lock_version as lock_ver4_13_0_, parent0_.deleted as deleted5_13_0_, parent0_.deleted_at as deleted_6_13_0_, parent0_.address as address7_13_0_, parent0_.state as state8_13_0_, parent0_.tpuser_id as tpuser_i9_13_0_ from parents parent0_ where parent0_.tpuser_id=? and ( parent0_.deleted = 0) 

2021-09-17 16:42:03.545 DEBUG [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] org.hibernate.SQL                        : select teacher0_.id as id1_15_0_, teacher0_.created_at as created_2_15_0_, teacher0_.updated_at as updated_3_15_0_, teacher0_.lock_version as lock_ver4_15_0_, teacher0_.deleted as deleted5_15_0_, teacher0_.deleted_at as deleted_6_15_0_, teacher0_.address as address7_15_0_, teacher0_.area_id as area_id8_15_0_, teacher0_.last_login_date as last_log9_15_0_, teacher0_.state as state10_15_0_, teacher0_.tpuser_id as tpuser_12_15_0_, teacher0_.type as type11_15_0_ from teachers teacher0_ where teacher0_.tpuser_id=? and ( teacher0_.deleted = 0) 

2021-09-17 16:42:03.581 DEBUG [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] org.hibernate.SQL                        : select parent0_.id as id1_13_0_, parent0_.created_at as created_2_13_0_, parent0_.updated_at as updated_3_13_0_, parent0_.lock_version as lock_ver4_13_0_, parent0_.deleted as deleted5_13_0_, parent0_.deleted_at as deleted_6_13_0_, parent0_.address as address7_13_0_, parent0_.state as state8_13_0_, parent0_.tpuser_id as tpuser_i9_13_0_ from parents parent0_ where parent0_.tpuser_id=? and ( parent0_.deleted = 0) 
2021-09-17 16:42:03.622 DEBUG [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] org.hibernate.SQL                        : select teacher0_.id as id1_15_0_, teacher0_.created_at as created_2_15_0_, teacher0_.updated_at as updated_3_15_0_, teacher0_.lock_version as lock_ver4_15_0_, teacher0_.deleted as deleted5_15_0_, teacher0_.deleted_at as deleted_6_15_0_, teacher0_.address as address7_15_0_, teacher0_.area_id as area_id8_15_0_, teacher0_.last_login_date as last_log9_15_0_, teacher0_.state as state10_15_0_, teacher0_.tpuser_id as tpuser_12_15_0_, teacher0_.type as type11_15_0_ from teachers teacher0_ where teacher0_.tpuser_id=? and ( teacher0_.deleted = 0) 
2021-09-17 16:42:03.623 TRACE [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] o.h.type.descriptor.sql.BasicBinder      : binding parameter [1] as [BIGINT] - [8991696]
2021-09-17 16:42:03.768 DEBUG [-,cebfb5f6a6b6a1c9,cebfb5f6a6b6a1c9,true] 40539 --- [nio-9000-exec-1] org.hibernate.SQL                        : select thirdparty0_.uid as uid15_18_1_, thirdparty0_.id as id1_18_1_, thirdparty0_.id as id1_18_0_, thirdparty0_.created_at as created_2_18_0_, thirdparty0_.updated_at as updated_3_18_0_, thirdparty0_.lock_version as lock_ver4_18_0_, thirdparty0_.avatar_url as avatar_u5_18_0_, thirdparty0_.city as city6_18_0_, thirdparty0_.country as country7_18_0_, thirdparty0_.nickname as nickname8_18_0_, thirdparty0_.openid as openid9_18_0_, thirdparty0_.platform as platfor10_18_0_, thirdparty0_.province as provinc11_18_0_, thirdparty0_.sex as sex12_18_0_, thirdparty0_.uid as uid15_18_0_, thirdparty0_.unionid as unionid13_18_0_, thirdparty0_.uuid as uuid14_18_0_ from third_party_tpusers thirdparty0_ where thirdparty0_.uid in (?, ?)

Problem summary:

Querying 2 tpusers produces 6 SQL statements. The final query against third_party_tpusers is a single statement only because we configured spring.jpa.properties.hibernate.default_batch_fetch_size=50; the batch fetch size cannot solve the N+1 SQL problem for @OneToOne.

《Spring Data JPA 实战》

Course introduction:

http://gitbook.cn/gitchat/column/5ab9bfd5c864031e9f8301bd

The content of 《Spring Data JPA 实战》 (Spring Data JPA in Action) distills and refines the author's learning and hands-on work experience. As the saying goes, today's developers "stand on the shoulders of giants and overtake on the curve". Modern frameworks keep improving and remove many problems and much busywork; if you have not yet looked at Spring Data JPA, it is worth doing so soon. As Java and microservice technology spread, Spring Cloud and Spring Boot are gradually unifying the Java framework landscape, ORM frameworks are getting renewed attention, and Spring Data has entered Java developers' field of view, chosen by more and more architects as the ORM direction.

The course is divided into basic, intermediate and advanced parts, covering Spring Data JPA usage, reference material, practice and source-code analysis. The basic part covers: an overall picture of JPA, basic JPA query methods, defining query methods (Defining Query Methods), and annotation-based query methods. It then advances step by step into the deeper parts: common annotations inside an @Entity, JpaRepository in detail, complex use cases and syntax for QueryByExampleExecutor and JpaSpecificationExecutor, JPA's MVC extension and REST support, DataSource source-code analysis (in-memory databases, multiple data sources), optimistic locking, and more.

All technology versions are based on Spring Boot 2.0. By taking this course you are already one step ahead of most developers.

About the author
Zhang Zhenhua has worked at Lvmama, Ctrip and Yaomaiche as a senior Java engineer, architect, development lead and technical manager. At e-commerce companies he was responsible for implementing and upgrading the platform architecture of the PC site and backend services, and currently works on Java architecture. In more than a decade in the industry he has never left Java development; he published 《Java 并发编程从入门到精通》 in 2015 and 《Spring Data JPA 从入门到精通》 in 2018.

Course contents
Lesson 01: An overall look at JPA
Lesson 02: Basic JPA query methods: JpaRepository in detail
Lesson 03: Defining Query Methods
Lesson 04: Annotation-based query methods
Lesson 05: Common annotations inside an @Entity
Lesson 06: JpaRepository extension: QueryByExampleExecutor
Lesson 07: JpaRepository extension: JpaSpecificationExecutor
Lesson 08: JpaRepository extension: custom repositories
Lesson 09: Auditing and @Version
Lesson 10: MVC web support: paging and sorting
Lesson 11: Spring Data JPA configuration: Spring Boot 2.0 loading in detail
Lesson 12: DataSource configuration and transactions in detail; multiple data sources
Lesson 13: Spring Data JPA with QueryDSL

Tested: when @Query uses native SQL, DTO projections do not map correctly

Reference:
https://www.baeldung.com/jpa-queries-custom-result-with-aggregation-functions
It indeed does not cover DTO projections with native SQL;

but judging from Hibernate it ought to be supported:
https://thoughts-on-java.org/dto-projections/
Do we have to configure @SqlResultSetMapping?
https://stackoverflow.com/questions/29082749/spring-data-jpa-map-the-native-query-result-to-non-entity-pojo
The underlying principle and mechanism remain to be investigated...

Caching not taking effect

 1: findById() shows caching in tests
 2: findAll() shows no caching in tests
 3: Derived query methods, e.g. findBookByBookName(), also show no caching in tests
 4: Queries via the @Query annotation also show no caching in tests

Do the first-level and second-level caches only take effect for findById?
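The observed behavior matches how Hibernate's caches are split: findById goes through EntityManager.find and the first/second-level entity cache, while findAll, derived queries and @Query are queries, which are served by the separate query cache. A hedged configuration sketch (standard Hibernate property names; a cache provider such as Ehcache must also be on the classpath):

```properties
# Enable Hibernate's second-level cache (requires a cache provider on the classpath)
spring.jpa.properties.hibernate.cache.use_second_level_cache=true
# findAll()/derived queries/@Query go through the separate query cache
spring.jpa.properties.hibernate.cache.use_query_cache=true
```

Even then, entities must be marked cacheable (e.g. with javax.persistence @Cacheable), and repository query methods only consult the query cache when flagged cacheable, e.g. with @QueryHints(@QueryHint(name = "org.hibernate.cacheable", value = "true")); that is consistent with only findById appearing cached by default.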

Ways to implement multi-tenancy with JPA

Teacher, SaaS services require multi-tenancy. Does JPA handle multi-tenancy at the framework level? If not, what would the general approach look like? Thanks.
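The JPA specification itself says nothing about multi-tenancy, but Hibernate, the default provider under Spring Data JPA, supports DATABASE- and SCHEMA-based multi-tenancy via a MultiTenantConnectionProvider plus a CurrentTenantIdentifierResolver. A hedged configuration sketch; the two implementation class names are placeholders you would write yourself:

```properties
# Hibernate-level multi-tenancy (assumption: Hibernate 5.x property names)
spring.jpa.properties.hibernate.multiTenancy=SCHEMA
spring.jpa.properties.hibernate.multi_tenant_connection_provider=com.example.TenantConnectionProvider
spring.jpa.properties.hibernate.tenant_identifier_resolver=com.example.TenantIdentifierResolver
```

The connection provider selects the datasource or schema for the tenant id that the resolver extracts from the current request (e.g. the logged-in tenant), which is the usual framework-level approach to this requirement.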

Special handling when unknown enum values in the database are mapped to entity enums: the @Convert approach

see https://www.baeldung.com/jpa-persisting-enums-in-jpa

  1. Introduction
    In JPA version 2.0 and below, there's no convenient way to map enum values to a database column. Each option has its limitations and drawbacks. These issues can be avoided by using JPA 2.1 features.

In this tutorial, we'll take a look at the different possibilities we have to persist enums in a database using JPA. We'll also describe their advantages and disadvantages as well as provide simple code examples.

  2. Using the @Enumerated Annotation
    The most common option to map an enum value to and from its database representation in JPA before 2.1 is to use the @Enumerated annotation. This way, we can instruct a JPA provider to convert an enum to its ordinal or String value.

We'll explore both options in this section.

But first, let's create a simple @Entity that we'll be using throughout this tutorial:

@Entity
public class Article {

    @Id
    private int id;

    private String title;

    // standard constructors, getters and setters
}
2.1. Mapping Ordinal Value
If we put the @Enumerated(EnumType.ORDINAL) annotation on the enum field, JPA will use the Enum.ordinal() value when persisting a given entity in the database.

Let's introduce the first enum:

public enum Status {
OPEN, REVIEW, APPROVED, REJECTED;
}
Next, let's add it to the Article class and annotate it with @Enumerated(EnumType.ORDINAL):

@Entity
public class Article {

    @Id
    private int id;

    private String title;

    @Enumerated(EnumType.ORDINAL)
    private Status status;

}
Now, when persisting an Article entity:

Article article = new Article();
article.setId(1);
article.setTitle("ordinal title");
article.setStatus(Status.OPEN);
JPA will trigger the following SQL statement:

insert
into
Article
(status, title, id)
values
(?, ?, ?)
binding parameter [1] as [INTEGER] - [0]
binding parameter [2] as [VARCHAR] - [ordinal title]
binding parameter [3] as [INTEGER] - [1]
A problem with this kind of mapping arises when we need to modify our enum. If we add a new value in the middle or rearrange the enum's order, we'll break the existing data model.

Such issues might be hard to catch, as well as problematic to fix, as we would have to update all the database records.

2.2. Mapping String Value
Analogously, JPA will use the Enum.name() value when storing an entity if we annotate the enum field with @Enumerated(EnumType.STRING).

Let's create the second enum:

public enum Type {
INTERNAL, EXTERNAL;
}
And let's add it to our Article class and annotate it with @Enumerated(EnumType.STRING):

@Entity
public class Article {

    @Id
    private int id;

    private String title;

    @Enumerated(EnumType.ORDINAL)
    private Status status;

    @Enumerated(EnumType.STRING)
    private Type type;

}
Now, when persisting an Article entity:

Article article = new Article();
article.setId(2);
article.setTitle("string title");
article.setType(Type.EXTERNAL);
JPA will execute the following SQL statement:

insert
into
Article
(status, title, type, id)
values
(?, ?, ?, ?)
binding parameter [1] as [INTEGER] - [null]
binding parameter [2] as [VARCHAR] - [string title]
binding parameter [3] as [VARCHAR] - [EXTERNAL]
binding parameter [4] as [INTEGER] - [2]
With @Enumerated(EnumType.STRING), we can safely add new enum values or change our enum's order. However, renaming an enum value will still break the database data.

Additionally, even though this data representation is far more readable compared to the @Enumerated(EnumType.ORDINAL) option, it also consumes a lot more space than necessary. This might turn out to be a significant issue when we need to deal with a high volume of data.

  3. Using the @PostLoad and @PrePersist Annotations
    Another option we have to deal with persisting enums in a database is to use standard JPA callback methods. We can map our enums back and forth in the @PostLoad and @PrePersist events.

The idea is to have two attributes in an entity. The first one is mapped to the database column, and the second one is a @Transient field that holds the real enum value. The transient attribute is then used by the business logic code.

To better understand the concept, let's create a new enum and use its int value in the mapping logic:

public enum Priority {
LOW(100), MEDIUM(200), HIGH(300);

private int priority;

private Priority(int priority) {
    this.priority = priority;
}

public int getPriority() {
    return priority;
}

public static Priority of(int priority) {
    return Stream.of(Priority.values())
      .filter(p -> p.getPriority() == priority)
      .findFirst()
      .orElseThrow(IllegalArgumentException::new);
}

}
We've also added the Priority.of() method to make it easy to get a Priority instance based on its int value.

Now, to use it in our Article class, we need to add two attributes and implement callback methods:

@Entity
public class Article {

@Id
private int id;

private String title;

@Enumerated(EnumType.ORDINAL)
private Status status;

@Enumerated(EnumType.STRING)
private Type type;

@Basic
private int priorityValue;

@Transient
private Priority priority;

@PostLoad
void fillTransient() {
    if (priorityValue > 0) {
        this.priority = Priority.of(priorityValue);
    }
}

@PrePersist
void fillPersistent() {
    if (priority != null) {
        this.priorityValue = priority.getPriority();
    }
}

}
Now, when persisting an Article entity:

Article article = new Article();
article.setId(3);
article.setTitle("callback title");
article.setPriority(Priority.HIGH);
JPA will trigger the following SQL query:

insert
into
Article
(priorityValue, status, title, type, id)
values
(?, ?, ?, ?, ?)
binding parameter [1] as [INTEGER] - [300]
binding parameter [2] as [INTEGER] - [null]
binding parameter [3] as [VARCHAR] - [callback title]
binding parameter [4] as [VARCHAR] - [null]
binding parameter [5] as [INTEGER] - [3]
Even though this option gives us more flexibility in choosing the database value's representation compared to previously described solutions, it's not ideal. It just doesn't feel right to have two attributes representing a single enum in the entity. Additionally, if we use this type of mapping, we aren't able to use enum's value in JPQL queries.

  4. Using the JPA 2.1 @Converter Annotation
    To overcome the limitations of the solutions shown above, the JPA 2.1 release introduced a new standardized API that can be used to convert an entity attribute to a database value and vice versa. All we need to do is create a new class that implements javax.persistence.AttributeConverter and annotate it with @Converter.

Let's see a practical example. But first, as usual, we'll create a new enum:

public enum Category {
SPORT("S"), MUSIC("M"), TECHNOLOGY("T");

private String code;

private Category(String code) {
    this.code = code;
}

public String getCode() {
    return code;
}

}
We also need to add it to the Article class:

@Entity
public class Article {

@Id
private int id;

private String title;

@Enumerated(EnumType.ORDINAL)
private Status status;

@Enumerated(EnumType.STRING)
private Type type;

@Basic
private int priorityValue;

@Transient
private Priority priority;

private Category category;

}
Now, let's create a new CategoryConverter:

@Converter(autoApply = true)
public class CategoryConverter implements AttributeConverter<Category, String> {

@Override
public String convertToDatabaseColumn(Category category) {
    if (category == null) {
        return null;
    }
    return category.getCode();
}

@Override
public Category convertToEntityAttribute(String code) {
    if (code == null) {
        return null;
    }

    return Stream.of(Category.values())
      .filter(c -> c.getCode().equals(code))
      .findFirst()
      .orElseThrow(IllegalArgumentException::new);
}

}
We've set the @Converter's autoApply value to true so that JPA will automatically apply the conversion logic to all mapped attributes of type Category. Otherwise, we'd have to put the @Converter annotation directly on the entity's field.

Let's now persist an Article entity:

Article article = new Article();
article.setId(4);
article.setTitle("converted title");
article.setCategory(Category.MUSIC);
Then JPA will execute the following SQL statement:

insert
into
Article
(category, priorityValue, status, title, type, id)
values
(?, ?, ?, ?, ?, ?)
Converted value on binding : MUSIC -> M
binding parameter [1] as [VARCHAR] - [M]
binding parameter [2] as [INTEGER] - [0]
binding parameter [3] as [INTEGER] - [null]
binding parameter [4] as [VARCHAR] - [converted title]
binding parameter [5] as [VARCHAR] - [null]
binding parameter [6] as [INTEGER] - [4]
As we can see, we can simply set our own rules of converting enums to a corresponding database value if we use the AttributeConverter interface. Moreover, we can safely add new enum values or change the existing ones without breaking the already persisted data.

The overall solution is simple to implement and addresses all the drawbacks of the options presented in the earlier sections.

  5. Using Enums in JPQL
    Let's now see how easy it is to use enums in JPQL queries.

To find all Article entities with Category.SPORT category, we need to execute the following statement:

String jpql = "select a from Article a where a.category = com.baeldung.jpa.enums.Category.SPORT";

List<Article> articles = em.createQuery(jpql, Article.class).getResultList();
It's important to note that in this case we need to use the fully qualified enum name.

Of course, we're not limited to static queries. It's perfectly legal to use named parameters:

String jpql = "select a from Article a where a.category = :category";

TypedQuery<Article> query = em.createQuery(jpql, Article.class);
query.setParameter("category", Category.TECHNOLOGY);

List<Article> articles = query.getResultList();
The above example presents a very convenient way to form dynamic queries.

Additionally, we don't need to use fully qualified names.

  6. Conclusion
    In this tutorial, we've covered various ways of persisting enum values in a database. We've presented the options we have when using JPA in version 2.0 and below, as well as a new API available in JPA 2.1 and above.

It's worth noting that these aren't the only possibilities to deal with enums in JPA. Some databases, like PostgreSQL, provide a dedicated column type to store enum values. However, such solutions are outside the scope of this article.

As a rule of thumb, we should always use the AttributeConverter interface and @Converter annotation if we're using JPA 2.1 or later.

If the database may contain enum values unknown to the entity, it is recommended to receive the database value as a String and do the business logic through another field that returns the enum, with special handling for the unknown case.
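That suggestion can also live inside the enum itself: add an UNKNOWN member and a lookup that falls back to it instead of throwing, then call the lookup from the converter's convertToEntityAttribute. A minimal sketch; the enum and its codes are illustrative:

```java
import java.util.stream.Stream;

// Lookup that degrades to UNKNOWN instead of throwing (codes are illustrative).
enum CategoryCode {
    SPORT("S"), MUSIC("M"), UNKNOWN("?");

    private final String code;

    CategoryCode(String code) {
        this.code = code;
    }

    public String getCode() {
        return code;
    }

    // Call this from AttributeConverter#convertToEntityAttribute so stale rows still load.
    public static CategoryCode ofCode(String code) {
        return Stream.of(values())
                .filter(c -> c.getCode().equals(code))
                .findFirst()
                .orElse(UNKNOWN);
    }
}
```

Compared with the Stream...orElseThrow lookup in the article above, the only change is orElse(UNKNOWN), so rows holding stale or unexpected codes still load instead of failing the whole query.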

How to enable logging

Enable transaction logging:

logging.level.org.springframework.transaction.interceptor=TRACE

The output format looks like this:

2012-08-22 18:50:00,031 TRACE - Getting transaction for [com.MyClass.myMethod]

[my own log statements from method com.MyClass.myMethod]

2012-08-22 18:50:00,142 TRACE - Completing transaction for [com.MyClass.myMethod]

Session Metrics

# Generate and log statistics
spring.jpa.properties.hibernate.generate_statistics=true
logging.level.org.hibernate.stat=DEBUG
15:37:21,964 DEBUG [org.hibernate.stat.internal.StatisticsImpl] - HHH000117: HQL: SELECT a FROM Author a WHERE a.lastName = :lastName, time: 26ms, rows: 1
15:37:21,972 INFO  [org.hibernate.engine.internal.StatisticalLoggingSessionEventListener] - Session Metrics {
    51899 nanoseconds spent acquiring 1 JDBC connections;
    30200 nanoseconds spent releasing 1 JDBC connections;
    419199 nanoseconds spent preparing 6 JDBC statements;
    21482801 nanoseconds spent executing 6 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    390499 nanoseconds spent executing 1 flushes (flushing a total of 2 entities and 2 collections);
    40233400 nanoseconds spent executing 1 partial-flushes (flushing a total of 2 entities and 2 collections)
}

Dynamically updating non-null fields

Update only the fields that are not null:

/**
 * @param user
 * @return
 */
@PostMapping("/user/notnull")
public User saveUserNotNullProperties(@RequestBody User user) {
    // Load the latest data from the database. To be rigorous, fetch by id and version and raise an optimistic-lock exception if nothing is found.
    User userSrc = userRepository.findById(user.getId()).get();
    // Copy the non-null fields onto userSrc: only fields passed with a non-null value are updated.
    PropertyUtils.copyNotNullProperty(user, userSrc);
    return userRepository.save(userSrc);
}

package com.example.jpa.example1.util;

import com.google.common.collect.Sets;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.BeanWrapperImpl;

import java.util.Set;

public class PropertyUtils {

    /**
     * Copy only the non-null properties.
     *
     * @param source
     * @param dest
     */
    public static void copyNotNullProperty(Object source, Object dest) {
        // Use Spring's utility class, ignoring fields that are null
        BeanUtils.copyProperties(source, dest, getNullPropertyNames(source));
    }

    /**
     * get property names whose value is null
     *
     * @param source
     * @return
     */
    private static String[] getNullPropertyNames(Object source) {
        final BeanWrapper src = new BeanWrapperImpl(source);
        java.beans.PropertyDescriptor[] pds = src.getPropertyDescriptors();

        Set<String> emptyNames = Sets.newHashSet();
        for (java.beans.PropertyDescriptor pd : pds) {
            Object srcValue = src.getPropertyValue(pd.getName());
            if (srcValue == null) {
                emptyNames.add(pd.getName());
            }
        }
        String[] result = new String[emptyNames.size()];
        return emptyNames.toArray(result);
    }
}

《Spring Data JPA从入门到精通》 (Spring Data JPA from Beginner to Expert): table of contents

Where to buy 《Spring Data JPA从入门到精通》:

Tmall: https://s.click.taobao.com/OsToiQw

Dangdang: http://product.dangdang.com/1295191369.html

JD: https://item.jd.com/12350823.html

The motivation for this book

As Java and microservice technology spread, Spring Cloud and Spring Boot are gradually unifying the Java framework landscape, and ORM frameworks are getting renewed attention. Spring Data has entered Java developers' field of view and is chosen by many architects as the ORM technology. There was no complete introduction to Spring Data JPA on the market; material was scattered, making it hard to master Spring Data JPA fully and deeply. This book focuses on raising the practical working efficiency of Java developers, and serves both as a self-study guide and as a Spring Data JPA reference manual. In the spirit of "teach a man to fish, not just give him a fish", it tells you not only what it is and how to use it, but also how to learn it step by step, along with its principles, techniques and practice. The whole book is based on Spring Boot, going from beginner to expert and introducing Spring Data JPA from the shallow to the deep. It is well suited to Java beginners who want to overtake on the curve and get onto the fast track of the Spring ecosystem.

"The future is already here; it's just not evenly distributed yet"

Looking at the ORM frameworks on the market: MyBatis is known for flexibility, but requires maintaining complex configuration, is not part of Spring's native stack, needs extra wiring, and even senior architects end up writing a lot of wrappers around it; Hibernate is known for HQL and relation mapping, but is not especially flexible to use. Then came Spring Data JPA, seemingly ready to claim the JPA crown of the ORM world: built on Hibernate underneath, it exposes extremely flexible interfaces that fit object-oriented and REST styles well, and more and more API-level abstractions are built on top of it, which is a blessing for architects and developers alike. Spring Data JPA has a natural advantage when paired with Spring Boot, and you will find more and more job postings shifting from the traditional SSH, Spring and MyBatis requirements toward the Spring Boot, Spring Cloud and Spring Data full-stack Spring requirements.

Back to the source

Before designing a system, an architect first designs the business and data models. Among the many frameworks to master, whether Spring Boot, Spring MVC, Spring Cloud or microservice architecture, none escapes the underlying data-access layer. If we grasp the essentials of the data layer and learn from the bottom up, the rest may come more easily.

What makes this book different

(1) Aimed at Java developers and Spring users, it is an essential book for Spring Data JPA development.

(2) From introduction to usage to principles and practice, it works well as a hands-on Spring Data JPA manual.

(3) The code is clear and iterated to completion, making JPA easy to master fully and completely.

(4) It teaches from practical experience and is highly pragmatic, getting straight to the point.

(5) The slides this book grew out of were well received by colleagues and endorsed by many Java programmers during internal corporate training.

Reading guide

Based on Spring Boot throughout, the book makes extensive use of UML to illustrate. It has 3 parts and 12 chapters.

(1) Basics: an overall view of JPA, basic JPA query methods, defining query methods, annotation-based query methods, and common annotations inside an @Entity, covering the basic usage and syntax of Spring Data JPA.

(2) Advanced: JpaRepository in detail, JPA's MVC extension and REST support, DataSource configuration, optimistic locking and more, covering the motivation and principles behind the implementation.

(3) Extensions: SpEL expressions in Spring Data, implementing caching with Spring Data Redis, speeding up development with IntelliJ IDEA, and an introduction to Spring Data REST, reaching out to the whole Spring Data ecosystem.

In addition, since Spring Data JPA changed somewhat with the Spring Boot 2.0 release, the author also addresses JPA under Spring Boot 2.0.

Praise from industry experts

Spring Data is a seriously underrated technology in China, and discussion of it has largely faded from view. Developers are more used to operating relational data with MyBatis or Hibernate and overlook NoSQL integration; the arrival of Spring Data made up for that. Although this book is titled Spring Data JPA, it also gives readers a deep introduction to Spring Data's abstract design and extensions. Through case studies and implementation principles, it helps developers grasp the whole picture of Spring Data and, more importantly, understand why the JPA specification matters.

--------------------------- Xiaomage (Ma Xinxi), technical expert at Alibaba

Spring Data进一步简化了Java访问SQL和NoSQL数据源的复杂度。本书详细介绍了Spring Data JPA框架的知识,是一本很好的学习参考书籍。
----Mongodb官方团队《Mongodb实战》第2版译者 徐雷

本书从浅入深,从原理剖析到经验结合,直观的把 spring data jpa 和周边功能展现给了读者,相信从 java 初学者到经验老道的架构师读此书都能有所收获。
--------------------------- 资深java老兵 林晓辉

Spring发展到现在已经是Java应用开发毕本的基础设施了,而且遵循它一贯的风格,孵化出一系列优秀的解决方案,如Spring Boot、Spring Data、Spring Cloud等,每一个解决方案都完全遵循了Spring的设计理念;
Spring Data Jpa在开发企业级应用时有其独特的优势,能帮助开发人员快速的进行各种数据库到Java模型的映射,帮我们进行快速的业务逻辑开发,而无需关心数据映射的一些细节。
我也曾经使用Spring Data Jpa开发过一个JavaEE项目开发脚手架ES项目,使用Spring Data Jpa能快速的帮助我完成项目DAO层的开发。强烈推荐大家在开发企业级应用时使用Spring Data,本书能让读者从入门到灵活运用,值得一读。

-------------------《亿级流量网站架构核心技术》书作者 张开涛

作为一个Java老程序员,2000年开始接触Java,2003年开始用Struts + MySQL做Java Web开发。刚开始的时候直接用JDBC访问数据库,影响比较深刻的是当时有大量的时间花在写SQL和处理结果集上。那个时候数据库设计是程序设计的很重要部分,一般都是先做数据库设计(例如用Power Designer做ER模型),然后再写程序。数据库设计除了表的设计之外,还会涉及到视图,触发器和存储过程等。到了2004年,Hibernate 1.0横空出世,当时身边有个大神同学(尹俊,目前就职于美国Google)花了一个月时间通读了文档和源码,给大家做了讲解,大家讨论之后决定将Hibernate引入到项目中,吃一下螃蟹。从此我就开始和ORM打起了交道。
ORM最大的好处就是让程序员关注在业务本身以及对应OO(面向对象)程序设计,这个更加契合领域设计和OO设计,而不是一开始陷入到数据库细节层面,影响总体设计。学过领域设计的同学都知道里面有关于Entity,Repository,Service等相关的概念,而JPA则很好的实现了这些概念。Spring Data JPA出现之后,则更加简化了我们访问数据库的方式。你只要花费1分钟,定义一个实体类(加上Entity注解)和扩展一个CrudRepository的接口,就可以具备对单表CRUD操作的基本功能。
在2016的一个实际项目中,我们在Spring Data JPA的基础上,实现了很多功能,例如字段自动加解秘,字段JSON与POJO自动映射,历史表(审计功能),自动设置创建时间/更新时间,乐观锁/悲观锁等,收获颇多。
虽然经常使用Spring Data JPA,但是基本上都是遇到问题现查文档,缺少一本提纲挈领,循序渐进完整讲解Spring Data JPA的书。而振华老弟年纪不大,却很爱专研技术,算得上是Spring Data JPA的专家,而他写的这本书正好满足了我以及广大Java程序员的需求,学习Spring Data JPA不在枯燥,同时非常翔实,完整的讲解了Spring Data JPA,并配合大量实例,兼具参考书和实战指南,值得广大读者仔细研读。
-----------------------王天青 DaoCloud首席架构师

Spring Data Jpa是一个非常出色的数据访问封装,可以极大的简化开发人员对数据库的操作编码,但是该框架在国内的应用并不多,主要由于该框架对于初学者来说的确简化了数据访问的开发,但是由于隔离了很多访问细节,对于各种复杂的查询如何使用会有一些学习成本,同时对于性能的把握也需要更深入的了解其底层原理才能真正的用好它。本书细致的介绍了Spring Data Jpa在各种场景下的使用方式,因此推荐给对此感兴趣的读者们。
-------------------翟永超、《Spring Cloud微服务实战》作者、spring4all.com发起人

Spring Data是一个伟大的项目,它为数据访问提供了一致、相对简单的编程模型,并且可用来操作几乎所有的主流存储。
Spring Data JPA是Spring Data的核心子项目之一。本书由浅入深,讲解了Spring Data JPA的常用功能与API,并结合实际工作中的场景,讲解如何扩展、如何避免踩坑等。你,值得拥有。
-------------------——《Spring Cloud与Docker微服务架构实战》作者 周立

随着微服务的流行,Spring Boot 与 Spring Cloud被广泛使用。Spring Data JPA 简化数据库的操作,本书作者从最简单的开始到复杂应用,娓娓道来,填补相关领域空白。
-------------------一号店CTO 韩军/Jason

Table of Contents:

Part I: Fundamentals
Chapter 1: An Overall View of JPA 3
1.1 Comparing ORM frameworks on the market 3
1.2 Introduction to JPA and its open-source implementations 4
1.3 Getting to know Spring Data 5
1.3.1 Introduction to Spring Data 5
1.3.2 Spring Data subprojects 5
1.3.3 Main features of Spring Data operations 6
1.4 Main classes and structure of Spring Data JPA 7
1.5 Quick-start example with MySQL 8
Chapter 2: Basic JPA Query Methods 13
2.1 The Repository in Spring Data Commons 14
2.2 The Repository class hierarchy (diagrams/hierarchy/structure) 14
2.3 CrudRepository methods in detail 17
2.3.1 The CrudRepository interface 17
2.3.2 CrudRepository usage example 18
2.4 PagingAndSortingRepository methods in detail 19
2.4.1 The PagingAndSortingRepository interface 20
2.4.2 PagingAndSortingRepository usage example 20
2.5 JpaRepository methods in detail 21
2.5.1 JpaRepository in depth 21
2.5.2 Using JpaRepository is the same: simply extend it, as in the following example 22
2.6 SimpleJpaRepository, the Repository implementation class 22
Chapter 3: Defining Query Methods 24
3.1 Configuring defined query methods 24
3.2 Setting the query lookup strategy 25
3.3 Creating query methods 26
3.4 Keyword list 27
3.5 Property Expressions in query methods 29
3.6 Handling query results 29
3.6.1 Parameters (Sort/Pageable): paging and sorting 29
3.6.2 Result forms (List/Stream/Page/Future) 30
3.6.3 Extending results with Projections 31
3.7 Implementation mechanics 34
Chapter 4: Annotation-Based Query Methods 36
4.1 @Query in detail 36
4.1.1 Syntax and source code 36
4.1.2 @Query usage 37
4.1.3 Sorting with @Query 38
4.1.4 Paging with @Query 39
4.2 @Param usage 39
4.3 SpEL expression support 40
4.4 Modifying queries with @Modifying 41
4.5 @QueryHints 42
4.6 Stored-procedure queries with @Procedure 43
4.7 Predefined queries with @NamedQueries 45
4.7.1 A form of predefined query 45
4.7.2 Usage example 45
4.7.3 Comparing @NamedQuery, @Query, and method-name queries 46
Chapter 5: Common Annotations Inside an @Entity Class 47
5.1 Overview of javax.persistence 47
5.2 Basic annotations: @Entity, @Table, @Id, @GeneratedValue, @Basic, @Column, @Transient, @Lob, @Temporal 50
5.2.1 A Blog example with the entity configured as follows 50
5.2.2 @Entity marks a class as a JPA-managed entity mapped to the specified database table 51
5.2.3 @Table specifies the table name 51
5.2.4 @Id marks the primary-key property; every entity must have one 51
5.2.5 @IdClass: composite primary keys via an external class 51
5.2.6 @GeneratedValue: primary-key generation strategies 53
5.2.7 @Basic maps a property to a table column; fields without any annotation default to @Basic 53
5.2.8 @Transient marks a non-persistent property that JPA ignores when mapping; the opposite of @Basic 53
5.2.9 @Column sets the column name a property maps to 53
5.2.10 @Temporal maps a Date property to a column of the corresponding precision 54
5.2.11 @Enumerated, a very handy annotation that maps enum fields directly 54
5.2.11 @Lob maps a property to the database's large-object types; the two supported kinds are listed below 55
5.2.12 @SqlResultSetMapping, @EntityResult, @ColumnResult, used together with @NamedNativeQuery; not recommended in practice 55
5.3 Relationship annotations: @OneToOne, @JoinColumn, @ManyToOne, @ManyToMany, @JoinTable, @OrderBy 56
5.3.1 @JoinColumn defines the foreign-key column name 56
5.3.2 @OneToOne: one-to-one relationships 56
5.3.3 @OneToMany and @ManyToOne: one-to-many and many-to-one 58
5.3.4 @OrderBy: sorting in relationship queries 58
5.3.5 @JoinTable: join tables 59
5.3.6 @ManyToMany: many-to-many 60
5.4 Left join vs. inner join and @EntityGraph 61
5.4.1 The left join / inner join problem 61
5.4.2 @EntityGraph 62
5.5 Relationship-query pitfalls from real work 62
Part II: Advanced Topics
Chapter 6: JpaRepository Extensions in Detail 66
6.1 Introduction to JpaRepository 66
6.2 Using QueryByExampleExecutor 67
6.2.1 QueryByExampleExecutor configuration in detail 67
6.2.2 QueryByExampleExecutor usage example 68
6.2.3 QueryByExampleExecutor characteristics and constraints 69
6.2.4 ExampleMatcher in detail 69
6.2.5 QueryByExampleExecutor scenarios and real-world use 71
6.2.6 How QueryByExampleExecutor works 74
6.3 JpaSpecificationExecutor in detail 75
6.3.1 How to use JpaSpecificationExecutor 75
6.3.2 A brief introduction to the Criteria concept 76
6.3.3 JpaSpecificationExecutor example 77
6.3.4 Specification extensions from real work 79
6.3.5 How JpaSpecificationExecutor works 80
6.4 Custom Repositories 81
6.4.1 Introduction to EntityManager 81
6.4.2 Implementing a custom Repository 83
6.4.3 Real-world application scenarios 84
Chapter 7: Spring Data JPA Extensions 87
7.1 Auditing and its events in detail 87
7.1.1 Configuring Auditing 88
7.1.2 @MappedSuperclass 90
7.1.3 How Auditing works 91
7.1.4 Extending Listener events 93
7.2 Optimistic locking with @Version 94
7.3 MVC web support 97
7.3.1 @EnableSpringDataWebSupport 97
7.3.2 The DomainClassConverter component 97
7.3.3 HandlerMethodArgumentResolvers for paging and sorting 98
7.3.4 @PageableDefault: changing the default page and size 100
7.3.5 How Page works 100
7.4 @EnableJpaRepositories in detail 102
7.4.1 How Spring Data JPA loads repositories 102
7.4.2 @EnableJpaRepositories in detail 103
7.4.3 JpaRepositoriesAutoConfiguration source analysis 105
7.5 A brief look at the default logging 106
7.6 Spring Boot JPA version issues 109
Chapter 8: DataSource Configuration 111
8.1 The default data source 111
8.1.1 Three ways to see what the default DataSource is 111
8.1.2 The configuration properties of datasource and jpa 114
8.1.3 JpaBaseConfiguration 116
8.1.4 The Configuration approach 117
8.2 Configuring Alibaba's DruidDataSource 118
8.3 Transaction handling 121
8.3.1 The default annotation-driven transactions with @Transactional 121
8.3.2 Declarative (implicit, a.k.a. AspectJ) transactions 125
8.4 Configuring multiple data sources 126
8.4.1 Defining two DataSources in application.properties 126
8.4.2 Defining two DataSourceConfig Java classes 127
8.5 Naming strategies in detail and in practice 129
8.5.1 Naming strategies in detail 130
8.5.2 Extensions from real work 132
8.6 The full traditional XML configuration 133
Part III: Extensions
Chapter 9: IntelliJ IDEA and Spring Data JPA 138
9.1 An overview of IntelliJ IDEA 138
9.2 The Database plugin 139
9.3 Persistence and other JPA-related plugins 143
9.4 IntelliJ IDEA views for source-code analysis 148
Chapter 10: Spring Data Redis in Detail 151
10.1 Redis with Jedis 151
10.2 Spring Boot + Spring Data Redis configuration 158
10.3 Combining Spring Data Redis with Spring Cache 165
10.3.1 Introduction to Spring Cache 165
10.3.2 Spring Boot quick-start demo 169
10.3.3 How Spring Boot Cache works 170
10.3.4 Combining Cache with Spring Data Redis 172
Chapter 11: SpEL Expressions 182
11.1 Introduction to SpEL 182
11.1.1 Main features of SpEL 183
11.1.2 How to use it 183
11.2 Basic SpEL syntax 184
11.2.1 Logical operators 185
11.2.2 Comparison operators 186
11.2.3 Logical relations 187
11.2.5 Regular-expression support 188
11.2.6 Bean references 188
11.2.7 List and Map operations 189
11.3 The main classes and how they work 190
11.3.1 ExpressionParser 190
11.3.2 The root object 191
11.3.3 EvaluationContext 192
11.3.4 SpelParserConfiguration: compiler configuration 192
11.3.5 Expression templates 194
11.3.6 Main class diagram 195
11.3.7 Features supported by SpEL 195
11.4 Main usage scenarios in Spring 196
11.4.1 SpEL support in Spring Data JPA 196
11.4.2 Spring Cache 197
11.4.3 @Value 197
11.4.4 Web validation scenarios 198
11.4.5 Summary 198
Chapter 12: Spring Data REST 199
12.1 Quick start 199
12.1.1 Introduction to Spring Data REST 199
12.1.2 Getting started 201
12.1.3 The Repository resource interfaces 208
12.2 Customizing Spring Data REST 209
12.2.1 @RepositoryRestResource: changing a Repository's path and resource name 209
12.2.2 @RestResource: changing the search path 210
12.2.3 Changing the returned result 211
12.2.4 Hiding certain Repositories, Repository query methods, or @Entity relationship fields 212
12.2.5 Hiding a Repository's CRUD methods 212
12.2.6 Customizing the JSON output 212
12.3 How Spring Boot 2.0 loads it 213

12.4 Future directions 214

JPA Attribute Converters

Suppose a value of 1 is stored encrypted in the database as aaa. When I query it, convertToEntityAttribute fires and decrypts it back to 1. Now I want to change the encryption scheme so that 1 is stored as bbb, and I would like convertToDatabaseColumn to map 1 to bbb on save. But the value read back is already 1, and on save, because nothing changed (or for some other reason), convertToDatabaseColumn never fires. For some tables you can trigger it by bumping the version or the update time, but one table was designed without those columns.

See: https://www.baeldung.com/jpa-attribute-converters

@Entity(name = "PersonTable")
public class Person {

    @Convert(converter = PersonNameConverter.class)
    private PersonName personName;
    
    // ...
}

@Converter
public class PersonNameConverter implements 
  AttributeConverter<PersonName, String> {

    private static final String SEPARATOR = ", ";

    @Override
    public String convertToDatabaseColumn(PersonName personName) {
        if (personName == null) {
            return null;
        }

        StringBuilder sb = new StringBuilder();
        if (personName.getSurname() != null && !personName.getSurname()
            .isEmpty()) {
            sb.append(personName.getSurname());
            sb.append(SEPARATOR);
        }

        if (personName.getName() != null 
          && !personName.getName().isEmpty()) {
            sb.append(personName.getName());
        }

        return sb.toString();
    }

    @Override
    public PersonName convertToEntityAttribute(String dbPersonName) {
        if (dbPersonName == null || dbPersonName.isEmpty()) {
            return null;
        }

        String[] pieces = dbPersonName.split(SEPARATOR);

        if (pieces == null || pieces.length == 0) {
            return null;
        }

        PersonName personName = new PersonName();        
        String firstPiece = !pieces[0].isEmpty() ? pieces[0] : null;
        if (dbPersonName.contains(SEPARATOR)) {
            personName.setSurname(firstPiece);

            if (pieces.length >= 2 && pieces[1] != null 
              && !pieces[1].isEmpty()) {
                personName.setName(pieces[1]);
            }
        } else {
            personName.setName(firstPiece);
        }

        return personName;
    }
}
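The converter's two methods are pure string logic, so their round-trip behavior can be checked outside of any JPA provider. The sketch below mirrors that logic in plain Java; the class and method names here are illustrative, not part of the Baeldung example:

```java
// Mirrors the PersonNameConverter logic above as plain static methods,
// so the round trip can be verified without a JPA provider.
public class PersonNameCodec {
    static final String SEPARATOR = ", ";

    // Equivalent of convertToDatabaseColumn: produces "surname, name"
    static String toColumn(String surname, String name) {
        StringBuilder sb = new StringBuilder();
        if (surname != null && !surname.isEmpty()) {
            sb.append(surname).append(SEPARATOR);
        }
        if (name != null && !name.isEmpty()) {
            sb.append(name);
        }
        return sb.toString();
    }

    // Equivalent of convertToEntityAttribute: returns {surname, name}
    static String[] fromColumn(String db) {
        if (db == null || db.isEmpty()) {
            return new String[] { null, null };
        }
        String[] pieces = db.split(SEPARATOR);
        if (db.contains(SEPARATOR)) {
            String surname = pieces[0].isEmpty() ? null : pieces[0];
            String name = (pieces.length >= 2 && !pieces[1].isEmpty()) ? pieces[1] : null;
            return new String[] { surname, name };
        }
        // No separator: the single piece is the given name, as in the converter
        return new String[] { null, pieces[0].isEmpty() ? null : pieces[0] };
    }

    public static void main(String[] args) {
        System.out.println(toColumn("Doe", "John"));    // Doe, John
        System.out.println(fromColumn("Doe, John")[0]); // Doe
        System.out.println(fromColumn("John")[1]);      // John
    }
}
```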

Large-Object Field Types

5.2.11 @Lob maps a property to the database's large-object types. Two kinds of database fields are supported:

  1. Clob (Character Large Objects) is the long-character type; java.sql.Clob, Character[], char[], and String are mapped to Clob.
  2. Blob (Binary Large Objects) is the byte type; java.sql.Blob, Byte[], byte[], and types implementing the Serializable interface are mapped to Blob.
  3. Because Clob and Blob take up a lot of memory, they are usually combined with @Basic(fetch = FetchType.LAZY) to make them lazily loaded.
// Clob
    @Lob
    @Column(columnDefinition = "Clob")
    public String getTestTxt() {
        return testTxt;
    }
// Blob
    @Lob
    @Column(columnDefinition = "Blob")
    public Byte[] getTestBlob() {
        return testBlob;
    }
// TEXT
    @Lob
    @Column(columnDefinition = "TEXT")
    public String getTestText() {
        return testText;
    }
// For TEXT, (columnDefinition = "TEXT") may be specified or omitted

Compilation Error

com.example.example2.entity.QUser
cannot be found.
Why are there compilation errors right after git clone?
Could you git clone it yourself and take a look?

Reader Questions Answered

Hello, I've read your book and would like to ask a few questions.

1. Spring Data JPA configuration is fairly complex, especially the relationship mappings, and it is easy to get wrong. Is there a tool that generates these configurations automatically? Most generators found online target MyBatis, a few target JPA, but none generate the relationship mappings.
Yes: IDEA can do it; see https://gitbook.cn/new/gitchat/activity/5a5405edf6e6d01dea2d5e23
2. Which is easier to use in a project today, Spring Data JPA or JPA + Hibernate?
All of our company's projects use JPA; there is no real distinction, it only depends on how familiar you are with it.
3. Transaction propagation: if A and B are configured with different transaction attributes, does propagation take the union of the transactions, the intersection, or ignore the later one?
That depends on how you configure the propagation behavior; I suggest studying Spring's seven transaction propagation behaviors in detail.
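Spring's propagation behaviors answer this directly: the inner method's own propagation setting decides whether it joins the caller's transaction or suspends it and starts its own; there is no union or intersection. An illustrative fragment, assuming Spring's @Transactional (the service and method names are hypothetical):

```java
@Service
public class OrderService {

    @Autowired
    private AuditService auditService;

    // REQUIRED (the default): joins the caller's transaction if one is
    // active, otherwise starts a new one.
    @Transactional
    public void placeOrder() {
        auditService.writeAuditLog();
        // ... order logic runs in the outer transaction
    }
}

@Service
class AuditService {

    // REQUIRES_NEW: suspends the caller's transaction and runs in its own,
    // so a rollback here does not roll back placeOrder(), and vice versa.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void writeAuditLog() {
        // ... audit insert
    }
}
```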
4. When MyBatis and Spring Data JPA are used in the same method, which transaction applies? Does JPA's first-level cache need a manual flush?
1) Don't mix them: everything is microservices now, so why mix?
2) Even if you do, the transaction is managed by Spring; what do MyBatis and JPA have to do with it? It depends on the propagation setting on each of your methods.
3) Personally I think not; for the principle, see chapters 21 and 28 of my Lagou course: https://kaiwu.lagou.com/course/courseInfo.htm?courseId=490&sid=20-h5Url-0&buyFrom=2&pageId=1pz4#/detail/pc?id=4721
5. In an enterprise information system with 100+ tables (invoices, receipts, subcontracts, and so on) that are related to each other, if bidirectional relationships are also built between the corresponding objects, will flush conflicts become very likely, since every flush has to compare the state of every associated object and may then update multiple tables?
1) If you are not yet comfortable with relationship mappings, don't create them; follow the MyBatis mindset: plain single-table objects with the join queries written yourself.
2) Even with relationships configured, you can set CascadeType so that no cascading updates happen; see chapter 07 of my Lagou course: https://kaiwu.lagou.com/course/courseInfo.htm?courseId=490&sid=20-h5Url-0&buyFrom=2&pageId=1pz4#/detail/pc?id=4707
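The CascadeType point can be made concrete: a relationship can be mapped for reading without cascading any writes. An illustrative JPA fragment (the entity names are hypothetical):

```java
@Entity
public class Invoice {

    @Id
    @GeneratedValue
    private Long id;

    // Read-only association: with no cascade attribute, saving or deleting
    // an Invoice never propagates inserts/updates/deletes to Contract.
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "contract_id")
    private Contract contract;
}
```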
6. Dirty-data checks against legacy databases
Some legacy databases have no database-level constraints; the application maintains the constraints instead. Performance improves, but data consistency can suffer. (PS: how are the two related?)
The hope now is to build the relationships through JPA, plus a series of rule classes (composable via the strategy or chain-of-responsibility pattern), to find the specific dirty rows.
For example, a logical foreign key that is null in a child table, or one whose value does not exist in the parent table.
Given such rules, read the tables and run the checks.

   Does JPA have a use case like this?

No; that is pure business logic. What does it have to do with JPA?

7. There is more material on Hibernate than on JPA, and more on JPA than on Spring Data JPA.
Where are the official Spring Data JPA materials? I could not find good ones.
See my GitHub home page; the materials are all there: https://github.com/zhangzhenhuajack/spring-data-jpa-guide

Hello, I've read your book; a few more questions, supplementing yesterday's.

1. When is JPA a good fit?
In an enterprise information system with 100+ tables (projects, contracts, invoices, receipts, subcontracts, and so on) that are related, if bidirectional relationships are also built between the objects, will flush conflicts become very likely, since every flush has to compare the state of every associated object and may update multiple tables?
Otherwise performance problems will appear.
So, up to what scale of table or object relationships is JPA suitable? Is it, with its complex relationships, suitable for an enterprise ERP system?
If 100+ objects are interrelated at the object level, will JPA run into performance and conflict problems?
There is no "suitable" or "unsuitable"; it only depends on whether you use it correctly. Frankly, seeing code that maps relationships among all 100+ objects makes me want to hit someone: its maintainability is terrible.
2. Transactions
2.1 In Java code, when JDBC connects to the database, does BeginTransaction merely acquire the connection and transaction attributes, with table or row locks in the database only taken at Commit?
Data is locked at flush time, not at commit. See chapter 19 of my Lagou course for details.
2.2 Transaction propagation: if A and B are configured with different transaction attributes, does propagation take the union, the intersection, or ignore the later transaction?
Duplicate.
2.3 When MyBatis and Spring Data JPA are used in the same method, which transaction applies? Does JPA's first-level cache need a manual flush?
Duplicate.
3. A link to the English documentation for the latest Spring Data JPA version?
Brother, this question is a bit lazy: you really cannot find the official docs? https://docs.spring.io/spring-data/jpa/docs/2.5.1/reference/html/
4. Under the Repeatable Read isolation level, can the first-level cache hold data that is no longer the latest, since other transactions may update rows without this Session knowing? In other words, when does the first-level cache refresh its entries, on merge or on find?
See chapters 21 and 22 of my Lagou course.
5. In program design, is an explicit persist recommended, or should the dirty-checking mechanism update directly at transaction commit?
Rely on JPA's own mechanism; save is enough.
6. Does the N+1 problem have to be caught by watching the SQL output, or can it be avoided entirely at the design stage by following rules?
Entirely avoidable. See chapters 25 and 26: https://kaiwu.lagou.com/course/courseInfo.htm?courseId=490&sid=20-h5Url-0&buyFrom=2&pageId=1pz4#/detail/pc?id=4725
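Designing N+1 away usually comes down to keeping associations lazy and fetching them explicitly in the queries that actually need them. An illustrative Spring Data JPA fragment (the repository, entity, and association names are hypothetical):

```java
public interface UserInfoRepository extends JpaRepository<UserInfo, Long> {

    // Option 1: declare the fetch plan on the method with an entity graph,
    // so the addresses come back in the same query as the users.
    @EntityGraph(attributePaths = {"addresses"})
    List<UserInfo> findByAgeGreaterThan(int age);

    // Option 2: spell out the join fetch in JPQL.
    @Query("select distinct u from UserInfo u left join fetch u.addresses")
    List<UserInfo> findAllWithAddresses();
}
```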
7. We want to use Spring Data JPA for rapid development with a separated front end and back end. The current approach:
(1) design the domain objects
(2) build the object relationships
(3) generate the tables automatically
(4) generate front-end and back-end code with tools (e.g. Jeecg, with permissions, reports, etc.)
(5) fill in the front-end pages
(6) add some query classes for the front end to use when combining data
Does this approach sound feasible to you?
Sure; I'd recommend getting familiar with jaeger/jsonapi, or with Spring Data REST.
8. Spring Data JPA configuration is fairly complex, especially the relationship mappings; is there a tool that generates these configurations, including the relationships?
Duplicate.
9. Which is easier to use in a project today, Spring Data JPA or JPA + Hibernate?
Brother, JPA is implemented by Hibernate. Choose freely by scenario.
10. Dirty-data checks against legacy databases: building relationships through JPA plus rule classes (strategy or chain-of-responsibility pattern) to find dirty rows, such as a null logical foreign key in a child table, or one whose value does not exist in the parent table. Does JPA have a use case like this?
Duplicate.
11. Is there a suitable framework or methodology for continuous integration of the database?
There is a concept called Java DB migration; not sure whether that is what you mean. I recommend flywaydb (there is a Gradle plugin); GitLab CI/CD can also do it.

How to solve the N+1 SQL problem caused by the @OneToOne annotation in Spring Data JPA's Hibernate

The experimental entity code is as follows:

@Setter
@Getter
@EqualsAndHashCode(of = {"id"}, callSuper = true)
@Entity
@Table(name = "tpusers")
public class Tpuser implements Serializable { // declaration garbled in the original; restored to match the serialVersionUID below
    private static final long serialVersionUID = 2299533081180191108L;
    private String name;
    private Integer gender;
    private String email;
    private Parent parent;
    private Teacher teacher;
    private List<ThirdPartyTpuser> thirdPartyTpusers;
    private Long id;

    @Id
    @Override
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long getId() {
        return this.id;
    }

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "tpuser")
    public List<ThirdPartyTpuser> getThirdPartyTpusers() {
        return thirdPartyTpusers;
    }

    @OneToOne(mappedBy = "tpuser", fetch = FetchType.LAZY) // In JPA, @OneToOne defaults to EAGER; configuring LAZY on the mappedBy side has no effect
//    @LazyToOne(LazyToOneOption.NO_PROXY) // bytecode-enhancement approach to making @OneToOne lazy; still produces the N+1 problem
//    @Fetch(FetchMode.JOIN) // makes no difference whether this is on or off
    public Parent getParent() {
        return parent;
    }

    @OneToOne(mappedBy = "tpuser")
  //    @LazyToOne(LazyToOneOption.NO_PROXY) // bytecode-enhancement approach to making @OneToOne lazy; still produces the N+1 problem
  //    @Fetch(FetchMode.JOIN)
    public Teacher getTeacher() {
        return teacher;
    }
}
@Getter
@Setter
@EqualsAndHashCode(of = "tpuser_id")
@Entity
@Table(name = "teachers")
@Include(rootLevel = true, type = "Teachers")
@Where(clause = "deleted = false")
public class Teacher {
    private Long areaId;
    private String address;
    private TeacherType type;
    private Tpuser tpuser;
    private Long id;

    @Id
    @Override
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long getId() {
        return this.id;
    }
    @Enumerated(EnumType.STRING)
    public TeacherType getType() {
        return type;
    }

//    @LazyToOne(LazyToOneOption.NO_PROXY) // bytecode-enhancement approach to making @OneToOne lazy; still produces the N+1 problem
    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "tpuser_id")
    public Tpuser getTpuser() {
        return tpuser;
    }
}
@Getter
@Setter
@Entity
@Table(name = "parents")
@Include(rootLevel = true, type = "Parents")
@EntityListeners(EntityChangeListener.class)
@Where(clause = "deleted = false")
public class Parent extends AbstractDeletedAuditBase {
    private String address;
    private Tpuser tpuser;
    private Long id;

    @Id
    @Override
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long getId() {
        return this.id;
    }

//    @LazyToOne(LazyToOneOption.NO_PROXY) // bytecode-enhancement approach to making @OneToOne lazy; still produces the N+1 problem
    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "tpuser_id", referencedColumnName = "id")
    public Tpuser getTpuser() {
        return tpuser;
    }
}

The SQL printed after the @OneToOne query finally executes:

2021-09-15 18:13:44.052 DEBUG [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] org.hibernate.SQL                        : select tpuser0_.id as id1_24_0_, parent1_.id as id1_13_1_, teacher2_.id as id1_15_2_, tpuser0_.created_at as created_2_24_0_, tpuser0_.updated_at as updated_3_24_0_, tpuser0_.lock_version as lock_ver4_24_0_, tpuser0_.auto_generate as auto_gen5_24_0_, tpuser0_.email as email6_24_0_, tpuser0_.gender as gender7_24_0_, tpuser0_.invitation_code_group as invitati8_24_0_, tpuser0_.invited_by_code as invited_9_24_0_, tpuser0_.mobile_phone as mobile_10_24_0_, tpuser0_.mobile_phone_validated as mobile_11_24_0_, tpuser0_.name as name12_24_0_, tpuser0_.password_hash as passwor13_24_0_, tpuser0_.password_updated_at as passwor14_24_0_, tpuser0_.state as state15_24_0_, tpuser0_.uuid as uuid16_24_0_, parent1_.created_at as created_2_13_1_, parent1_.updated_at as updated_3_13_1_, parent1_.lock_version as lock_ver4_13_1_, parent1_.deleted as deleted5_13_1_, parent1_.deleted_at as deleted_6_13_1_, parent1_.address as address7_13_1_, parent1_.state as state8_13_1_, parent1_.tpuser_id as tpuser_i9_13_1_, teacher2_.created_at as created_2_15_2_, teacher2_.updated_at as updated_3_15_2_, teacher2_.lock_version as lock_ver4_15_2_, teacher2_.deleted as deleted5_15_2_, teacher2_.deleted_at as deleted_6_15_2_, teacher2_.address as address7_15_2_, teacher2_.area_id as area_id8_15_2_, teacher2_.last_login_date as last_log9_15_2_, teacher2_.state as state10_15_2_, teacher2_.tpuser_id as tpuser_12_15_2_, teacher2_.type as type11_15_2_ from tpusers tpuser0_ left outer join parents parent1_ on tpuser0_.id=parent1_.tpuser_id and ( parent1_.deleted = 0) left outer join teachers teacher2_ on tpuser0_.id=teacher2_.tpuser_id and ( teacher2_.deleted = 0) where tpuser0_.id in (? , ?) limit ?
2021-09-15 18:13:44.102 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicBinder      : binding parameter [1] as [BIGINT] - [88420]
2021-09-15 18:13:44.103 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicBinder      : binding parameter [2] as [BIGINT] - [88421]
2021-09-15 18:13:44.148 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([id1_24_0_] : [BIGINT]) - [88420]
2021-09-15 18:13:44.149 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([id1_13_1_] : [BIGINT]) - [151600]
2021-09-15 18:13:44.149 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([id1_15_2_] : [BIGINT]) - [110169]
2021-09-15 18:13:44.160 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([created_2_24_0_] : [TIMESTAMP]) - [2015-03-13T09:17:21Z]
2021-09-15 18:13:44.161 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([updated_3_24_0_] : [TIMESTAMP]) - [2019-12-05T05:43:17Z]
2021-09-15 18:13:44.162 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([lock_ver4_24_0_] : [INTEGER]) - [14]
2021-09-15 18:13:44.163 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([auto_gen5_24_0_] : [BOOLEAN]) - [false]
2021-09-15 18:13:44.164 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([email6_24_0_] : [VARCHAR]) - [[email protected]]
2021-09-15 18:13:44.165 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([gender7_24_0_] : [INTEGER]) - [1]
2021-09-15 18:13:44.182 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([invitati8_24_0_] : [VARCHAR]) - []
2021-09-15 18:13:44.182 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([invited_9_24_0_] : [VARCHAR]) - []
2021-09-15 18:13:44.182 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([mobile_10_24_0_] : [VARCHAR]) - [10000088420]
2021-09-15 18:13:44.183 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([mobile_11_24_0_] : [BOOLEAN]) - [true]
2021-09-15 18:13:44.183 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([name12_24_0_] : [VARCHAR]) - [Brian aaa]
2021-09-15 18:13:44.186 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([passwor13_24_0_] : [VARCHAR]) - [8ffff2012f629266b004b54886de92f91636af28c48d04806bc3cf3e4c359ccf]
2021-09-15 18:13:44.187 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([passwor14_24_0_] : [TIMESTAMP]) - [2019-12-05T05:43:17Z]
2021-09-15 18:13:44.187 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([state15_24_0_] : [INTEGER]) - [null]
2021-09-15 18:13:44.190 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([uuid16_24_0_] : [VARCHAR]) - [d86970d0-0426-0136-35da-0c4de9bf6bbe]
2021-09-15 18:13:44.195 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([created_2_13_1_] : [TIMESTAMP]) - [2015-03-13T09:18:05Z]
2021-09-15 18:13:44.196 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([updated_3_13_1_] : [TIMESTAMP]) - [2017-07-21T06:23:12Z]
2021-09-15 18:13:44.197 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([lock_ver4_13_1_] : [INTEGER]) - [25]
2021-09-15 18:13:44.197 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted5_13_1_] : [BOOLEAN]) - [false]
2021-09-15 18:13:44.197 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted_6_13_1_] : [TIMESTAMP]) - [1969-12-31T16:00:00Z]
2021-09-15 18:13:44.198 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([address7_13_1_] : [VARCHAR]) - [null]
2021-09-15 18:13:44.198 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([state8_13_1_] : [BIGINT]) - [1]
2021-09-15 18:13:44.198 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([tpuser_i9_13_1_] : [BIGINT]) - [88420]
2021-09-15 18:13:44.202 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([created_2_15_2_] : [TIMESTAMP]) - [2015-03-13T09:17:21Z]
2021-09-15 18:13:44.203 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([updated_3_15_2_] : [TIMESTAMP]) - [2015-03-13T09:19:50Z]
2021-09-15 18:13:44.204 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([lock_ver4_15_2_] : [INTEGER]) - [1]
2021-09-15 18:13:44.204 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted5_15_2_] : [BOOLEAN]) - [false]
2021-09-15 18:13:44.205 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted_6_15_2_] : [TIMESTAMP]) - [1969-12-31T16:00:00Z]
2021-09-15 18:13:44.205 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([address7_15_2_] : [VARCHAR]) - [null]
2021-09-15 18:13:44.206 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([area_id8_15_2_] : [BIGINT]) - [807]
2021-09-15 18:13:44.206 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([last_log9_15_2_] : [TIMESTAMP]) - [null]
2021-09-15 18:13:44.207 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([state10_15_2_] : [BIGINT]) - [1]
2021-09-15 18:13:44.207 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([tpuser_12_15_2_] : [BIGINT]) - [88420]
2021-09-15 18:13:44.208 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([id1_24_0_] : [BIGINT]) - [88421]
2021-09-15 18:13:44.209 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([id1_13_1_] : [BIGINT]) - [151601]
2021-09-15 18:13:44.209 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([id1_15_2_] : [BIGINT]) - [127175]
2021-09-15 18:13:44.210 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([created_2_24_0_] : [TIMESTAMP]) - [2015-03-13T09:18:12Z]
2021-09-15 18:13:44.212 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([updated_3_24_0_] : [TIMESTAMP]) - [2018-03-07T11:16:02Z]
2021-09-15 18:13:44.215 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([lock_ver4_24_0_] : [INTEGER]) - [3]
2021-09-15 18:13:44.216 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([auto_gen5_24_0_] : [BOOLEAN]) - [true]
2021-09-15 18:13:44.216 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([email6_24_0_] : [VARCHAR]) - [null]
2021-09-15 18:13:44.217 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([gender7_24_0_] : [INTEGER]) - [1]
2021-09-15 18:13:44.217 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([invitati8_24_0_] : [VARCHAR]) - []
2021-09-15 18:13:44.218 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([invited_9_24_0_] : [VARCHAR]) - []
2021-09-15 18:13:44.218 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([mobile_10_24_0_] : [VARCHAR]) - [10000088421]
2021-09-15 18:13:44.218 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([mobile_11_24_0_] : [BOOLEAN]) - [true]
2021-09-15 18:13:44.219 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([name12_24_0_] : [VARCHAR]) - [独径深幽的家长]
2021-09-15 18:13:44.219 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([passwor13_24_0_] : [VARCHAR]) - [8ed9074f0c0883c64830b82335c9d884d17665202f8fc45d1bb74394cbfe5d3b]
2021-09-15 18:13:44.220 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([passwor14_24_0_] : [TIMESTAMP]) - [2015-08-29T10:23:11Z]
2021-09-15 18:13:44.220 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([state15_24_0_] : [INTEGER]) - [null]
2021-09-15 18:13:44.220 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([uuid16_24_0_] : [VARCHAR]) - [d869f650-0426-0136-35da-0c4de9bf6bbe]
2021-09-15 18:13:44.221 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([created_2_13_1_] : [TIMESTAMP]) - [2015-03-13T09:18:12Z]
2021-09-15 18:13:44.222 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([updated_3_13_1_] : [TIMESTAMP]) - [2015-03-13T09:18:12Z]
2021-09-15 18:13:44.222 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([lock_ver4_13_1_] : [INTEGER]) - [0]
2021-09-15 18:13:44.224 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted5_13_1_] : [BOOLEAN]) - [false]
2021-09-15 18:13:44.225 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted_6_13_1_] : [TIMESTAMP]) - [1969-12-31T16:00:00Z]
2021-09-15 18:13:44.225 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([address7_13_1_] : [VARCHAR]) - [null]
2021-09-15 18:13:44.225 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([state8_13_1_] : [BIGINT]) - [null]
2021-09-15 18:13:44.225 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([tpuser_i9_13_1_] : [BIGINT]) - [88421]
2021-09-15 18:13:44.226 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([created_2_15_2_] : [TIMESTAMP]) - [2015-09-02T12:31:17Z]
2021-09-15 18:13:44.227 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([updated_3_15_2_] : [TIMESTAMP]) - [2015-09-02T12:31:17Z]
2021-09-15 18:13:44.227 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([lock_ver4_15_2_] : [INTEGER]) - [0]
2021-09-15 18:13:44.228 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted5_15_2_] : [BOOLEAN]) - [false]
2021-09-15 18:13:44.228 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([deleted_6_15_2_] : [TIMESTAMP]) - [1969-12-31T16:00:00Z]
2021-09-15 18:13:44.229 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([address7_15_2_] : [VARCHAR]) - [null]
2021-09-15 18:13:44.229 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([area_id8_15_2_] : [BIGINT]) - [null]
2021-09-15 18:13:44.229 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([last_log9_15_2_] : [TIMESTAMP]) - [null]
2021-09-15 18:13:44.229 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([state10_15_2_] : [BIGINT]) - [null]
2021-09-15 18:13:44.230 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicExtractor   : extracted value ([tpuser_12_15_2_] : [BIGINT]) - [88421]
2021-09-15 18:13:44.599 DEBUG [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] org.hibernate.SQL                        : select thirdparty0_.uid as uid15_18_1_, thirdparty0_.id as id1_18_1_, thirdparty0_.id as id1_18_0_, thirdparty0_.created_at as created_2_18_0_, thirdparty0_.updated_at as updated_3_18_0_, thirdparty0_.lock_version as lock_ver4_18_0_, thirdparty0_.avatar_url as avatar_u5_18_0_, thirdparty0_.city as city6_18_0_, thirdparty0_.country as country7_18_0_, thirdparty0_.nickname as nickname8_18_0_, thirdparty0_.openid as openid9_18_0_, thirdparty0_.platform as platfor10_18_0_, thirdparty0_.province as provinc11_18_0_, thirdparty0_.sex as sex12_18_0_, thirdparty0_.uid as uid15_18_0_, thirdparty0_.unionid as unionid13_18_0_, thirdparty0_.uuid as uuid14_18_0_ from third_party_tpusers thirdparty0_ where thirdparty0_.uid in (?, ?)
2021-09-15 18:13:44.599 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicBinder      : binding parameter [1] as [BIGINT] - [88420]
2021-09-15 18:13:44.599 TRACE [-,1c8ce272d06f45bc,1c8ce272d06f45bc,true] 2609 --- [nio-9000-exec-2] o.h.type.descriptor.sql.BasicBinder      : binding parameter [2] as [BIGINT] - [88421]

The logging configuration that produces the output above:

logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type.descriptor.sql=trace

Note that `@LazyToOne(LazyToOneOption.NO_PROXY)` must be removed from all three entities involved; otherwise the N+1 query problem reappears:

@LazyToOne(LazyToOneOption.NO_PROXY)
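For context, a hedged sketch of where this annotation typically sits; the entity and field names here (`Order`, `Customer`) are hypothetical, not from this guide:

```java
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import org.hibernate.annotations.LazyToOne;
import org.hibernate.annotations.LazyToOneOption;

@Entity
public class Order {

    @Id
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    // @LazyToOne(LazyToOneOption.NO_PROXY) // removing this line avoids the N+1 queries described above
    private Customer customer;
}
```

With NO_PROXY removed, Hibernate can hand out a lazy proxy and batch-fetch the associations (as in the `in (?, ?)` query shown in the log above) instead of issuing one select per row.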

Example: dynamic entity names with #{#entityName}

The shared GenericTokenRepository

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.NoRepositoryBean;
import org.springframework.data.repository.query.Param;
import org.springframework.transaction.annotation.Transactional;

import java.time.Instant;
import java.util.List;
import java.util.Optional;

/**
 * GenericTokenRepository: the shared interface
 *
 * @param <T> UserToken/TeacherToken -- these two entities share this GenericTokenRepository
 */
@NoRepositoryBean
public interface GenericTokenRepository<T extends GenericToken> extends JpaRepository<T, Long> {
    /**
     * Look up a token entity by its token string
     *
     * @param token the token string
     * @return the matching entity, if any
     */
    Optional<T> findByToken(String token);


    /**
     * Query operation
     *
     * @param userId the user's id
     * @param expiresInBorder lower bound for expiresIn
     * @return valid (unexpired, not deactivated) tokens
     */
    @Query(value = "select t from #{#entityName} t where t.userId = :userId and t.expiresIn > :expiresInBorder and t.deactivatedAt is null")
    List<T> findValidTokens(@Param("userId") Long userId, @Param("expiresInBorder") Long expiresInBorder);


    /**
     * Update operation
     *
     * @param ids ids of the tokens to deactivate
     * @param reason the deactivation reason
     * @param deactivatedAt the deactivation timestamp
     */
    @Modifying
    @Query(value = "update #{#entityName} set deactivationReason = :deactivationReason, deactivatedAt = :deactivatedAt where id in :ids and deactivatedAt is null")
    @Transactional
    void expireTokensWithReason(@Param("ids") Iterable<Long> ids, @Param("deactivationReason") DeactivationReason reason,
                                @Param("deactivatedAt") Instant deactivatedAt);

}

The two subclasses of GenericTokenRepository

public interface UserTokenRepository extends GenericTokenRepository<UserToken> {
}
public interface TeacherTokenRepository extends GenericTokenRepository<TeacherToken> {
}

The GenericToken parent entity

import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.Setter;
import org.springframework.data.jpa.domain.support.AuditingEntityListener;

import javax.persistence.*;
import java.time.Instant;

@Getter
@Setter
@EqualsAndHashCode(of = {"id"}) // callSuper=true is invalid here: GenericToken extends only Object
@MappedSuperclass
@EntityListeners(AuditingEntityListener.class)
public abstract class GenericToken {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private Instant createdAt;
    private Instant updatedAt;
    private Integer version;
    private Long refreshTokenId;
    private String serviceName;
    private String token;
    private Long expiresIn;
    private Instant deactivatedAt;
    private Long userId;
    private String uuid;
}

TeacherToken and UserToken map to two different tables; one column name differs for the teacher table:

@Getter
@Setter
@Entity
@Table(name = "tpuser_tokens")
@AttributeOverride(name = "userId", column = @Column(name = "teacher_id"))
public class TeacherToken extends GenericToken{
}
@Getter
@Setter
@Entity
@Table(name = "user_tokens")
public class UserToken extends GenericToken {
}

At call sites, each piece of business logic simply uses its own repository.
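For illustration, a minimal sketch of such a call site. The service name, and the use of epoch seconds as the `expiresInBorder`, are assumptions for this sketch, not part of the original code:

```java
import org.springframework.stereotype.Service;

import java.time.Instant;
import java.util.List;

// Hypothetical service: each business flow injects its own repository; the
// shared #{#entityName} queries in GenericTokenRepository then run against
// the correct table for each subtype.
@Service
public class TokenQueryService {

    private final UserTokenRepository userTokenRepository;
    private final TeacherTokenRepository teacherTokenRepository;

    public TokenQueryService(UserTokenRepository userTokenRepository,
                             TeacherTokenRepository teacherTokenRepository) {
        this.userTokenRepository = userTokenRepository;
        this.teacherTokenRepository = teacherTokenRepository;
    }

    public List<UserToken> validUserTokens(Long userId) {
        // #{#entityName} resolves to UserToken, querying user_tokens
        return userTokenRepository.findValidTokens(userId, Instant.now().getEpochSecond());
    }

    public List<TeacherToken> validTeacherTokens(Long teacherId) {
        // #{#entityName} resolves to TeacherToken, querying tpuser_tokens
        return teacherTokenRepository.findValidTokens(teacherId, Instant.now().getEpochSecond());
    }
}
```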

Common Jackson annotations for serializing associations: breaking circular references and fixing frequent JSON exceptions

This error keeps being reported: No serializer found for class org.hibernate.proxy.pojo.bytebuddy.ByteBuddyInterceptor and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: com.example.demo.User["role"]

The solutions are as follows:

1. spring.jackson.serialization.FAIL_ON_EMPTY_BEANS=false
2. Configure the ObjectMapper yourself:

@Bean
public MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() {
    ObjectMapper mapper = new ObjectMapper();
    mapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);
    return new MappingJackson2HttpMessageConverter(mapper);
}

https://stackoverflow.com/questions/28862483/spring-and-jackson-how-to-disable-fail-on-empty-beans-through-responsebody
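Disabling FAIL_ON_EMPTY_BEANS only silences the proxy error; the circular-reference side of the problem is handled by the annotation pair below. This sketch reuses the House/Entrust mapping shown earlier in this guide, with only the relevant fields kept:

```java
import com.fasterxml.jackson.annotation.JsonBackReference;
import com.fasterxml.jackson.annotation.JsonManagedReference;

import javax.persistence.Entity;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import java.util.List;

@Entity
public class House {
    // the "forward" side: serialized normally
    @JsonManagedReference
    @OneToMany(mappedBy = "house")
    private List<Entrust> entrustList;
}

@Entity
public class Entrust {
    // the "back" side: omitted during serialization, which breaks the infinite recursion
    @JsonBackReference
    @ManyToOne
    private House house;
}
```

A simpler but cruder alternative is `@JsonIgnore` on the back-reference field, at the cost of losing that property entirely in the JSON output.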

Avoiding the select before save(): insert directly, improving the efficiency of batch inserts

Step 1: have the entity implement Persistable<ID> and Serializable.

Step 2: override the isNew() method.

A reference entity:

@Builder
@Getter
@Setter
@ToString
@Entity
@NoArgsConstructor
@AllArgsConstructor
@Table(catalog = "user_gold", name = "user_goldjours_shard", indexes = {@Index(unique = true, columnList = "business_code_hash", name = "ux_on_business_code_hash"), @Index(columnList = "uuid,context_type,context_id", name = "idx_on_uuid_and_context_type_and_context_id"), @Index(columnList = "uuid,reason", name = "idx_on_uuid_and_reason")})
@org.hibernate.annotations.Table(appliesTo = "user_goldjours_shard", comment = "用户金贝流水拆分表") // adds a comment to the table
@EntityListeners(AuditingEntityListener.class)
public class UserGoldJoursShard implements Persistable<Long>, Serializable {

    private static final long serialVersionUID = 2225926240419540529L;

    /**
     * The id is generated with Hibernate's table-generator strategy, preventing id collisions between tables
     */
    @Column(nullable = false)
    @Id
    private Long id;
    @Column(columnDefinition = "varchar(255) DEFAULT NULL COMMENT '用户的 UUID'")
    private String uuid;
    @Column(columnDefinition = "int(11) DEFAULT NULL COMMENT '变动前数量'")
    private Long currentAmount;
 
    @Version
    private Long version; // the version field is required; it avoids the "detached entity passed to persist" exception
    @Transient
    boolean defaultNew = false;

    @Override
    public boolean isNew() {
        return null == getId() || defaultNew;
    }

    public void setDefaultNew(boolean defaultNew) {
        this.defaultNew = defaultNew;
    }
}

When creating a new UserGoldJoursShard, set defaultNew = true; a calling example:

    @Transient
    public UserGoldJoursShard initUserGoldShell() {
        UserGoldJoursShard userGoldShell = new UserGoldJoursShard();
        BeanUtils.copyProperties(this, userGoldShell);
        userGoldShell.setId(this.getId());
        // avoids the select that would otherwise be issued before the insert
        userGoldShell.setDefaultNew(true);
        // keep the old table's id, so the row can be updated back in exceptional cases
        userGoldShell.setOldTableId(getId());
        return userGoldShell;
    }
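The isNew() decision itself is plain Java and can be checked without a Spring container. A minimal stand-in (the class names TokenLike and IsNewDemo are hypothetical):

```java
// Stand-in for an entity implementing Persistable<Long>: Spring Data calls
// isNew() to decide between EntityManager.persist (a plain insert) and
// merge (which first issues the extra select this section avoids).
class TokenLike {
    Long id;
    boolean defaultNew;

    boolean isNew() {
        // same rule as UserGoldJoursShard.isNew() above
        return id == null || defaultNew;
    }
}

public class IsNewDemo {
    public static void main(String[] args) {
        TokenLike t = new TokenLike();
        System.out.println(t.isNew()); // true: no id assigned yet

        t.id = 1L;
        System.out.println(t.isNew()); // false: treated as existing, select before insert

        t.defaultNew = true;
        System.out.println(t.isNew()); // true: forced to behave as new, insert only
    }
}
```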

After using an interface DTO to receive a query result from @Query, what is a convenient way to copy all of the interface DTO's property values onto a VO object?

While working with JPA, the following situation came up: an interface DTO receives the result of an @Query. Some additional fields need to be set, and the interface DTO itself cannot be modified, so its property values have to be copied onto a VO first, and the new properties are then set on the VO. The initial idea was to use MapStruct for the property copy, but testing showed it does not support interfaces. Is there a more convenient way to handle this?
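One workable answer: Spring's `BeanUtils.copyProperties(source, target)` generally handles this case, because it copies via the source's getters, and an interface-projection proxy does expose those getters. The same idea can be demonstrated with only the JDK; the interface `UserView`, class `UserVO`, and the `copy` helper below are hypothetical names for this sketch:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Map;

// Hypothetical projection interface, as an @Query interface DTO would look
interface UserView {
    Long getId();
    String getName();
}

// Target VO with an extra field the projection does not have
class UserVO {
    private Long id;
    private String name;
    private String extra;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getExtra() { return extra; }
    public void setExtra(String extra) { this.extra = extra; }
}

public class ProjectionCopyDemo {
    // Copies every readable property of the source interface onto a matching
    // setter of the target bean -- a stdlib-only stand-in for
    // org.springframework.beans.BeanUtils.copyProperties(source, target).
    static void copy(Object source, Class<?> sourceInterface, Object target) throws Exception {
        for (PropertyDescriptor pd : Introspector.getBeanInfo(sourceInterface).getPropertyDescriptors()) {
            Method read = pd.getReadMethod();
            if (read == null) continue;
            try {
                Method write = target.getClass().getMethod(
                        "set" + Character.toUpperCase(pd.getName().charAt(0)) + pd.getName().substring(1),
                        read.getReturnType());
                write.invoke(target, read.invoke(source));
            } catch (NoSuchMethodException ignored) { /* no matching setter on the VO */ }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate the dynamic proxy Spring Data returns for an interface projection
        Map<String, Object> row = Map.of("id", 1L, "name", "jack");
        UserView view = (UserView) Proxy.newProxyInstance(
                UserView.class.getClassLoader(), new Class<?>[]{UserView.class},
                (p, m, a) -> row.get(Introspector.decapitalize(m.getName().substring(3))));

        UserVO vo = new UserVO();
        copy(view, UserView.class, vo);
        vo.setExtra("added later"); // now set the VO-only properties
        System.out.println(vo.getId() + " " + vo.getName() + " " + vo.getExtra());
    }
}
```

Since the real Spring helper takes the same (source, target) shape, in application code a plain `BeanUtils.copyProperties(view, vo)` followed by the extra setters is usually all that is needed.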
