go-gorm / dbresolver
Multiple databases, read-write splitting for GORM
Home Page: https://gorm.io/docs/dbresolver.html
License: MIT License
Is it possible to use GORM preload to load data from multiple DB sources?
Currently it seems the connection always uses the first source, so tables in the second source cannot be reached when preloading.
Support GORM preload when multiple DB sources are involved.
Please add an option or hook that is invoked whenever a service operation switches between write and read. I am writing trace logs and need to know the current status (write/read) of each operation; with such a hook I could record it.
I'm currently working on a project where I need to implement a custom policy for DBResolver that takes into account the zone information of each database connection. The goal is to preferentially select a connection from a specific zone when resolving the connection pool.
I've created a custom struct DBWithZone that embeds *gorm.DB, includes a Zone field, and implements the gorm.ConnPool interface. I've also implemented a custom policy NearZonePolicy that attempts to select a DBWithZone from the connection pool based on the preferred zone.
However, I've encountered an issue: the gorm.ConnPool passed to the Resolve method of my custom policy is actually of type *sql.DB, and I can't directly convert it to my DBWithZone type.
Here's a simplified version of my code:
type DBWithZone struct {
	*gorm.DB
	Zone string
}

type NearZonePolicy struct {
	PreferredZone string
}

func (n *NearZonePolicy) Resolve(connPools []gorm.ConnPool) gorm.ConnPool {
	for _, pool := range connPools {
		if dbWithZone, ok := pool.(*DBWithZone); ok {
			if dbWithZone.Zone == n.PreferredZone {
				return dbWithZone.DB
			}
		}
	}
	return connPools[0]
}
In the Resolve method, the type assertion pool.(*DBWithZone) fails because pool is of type *sql.DB.
I'm looking for a way to associate each gorm.ConnPool (or *sql.DB) with its corresponding zone information so that I can implement my custom policy. Is there a recommended way to achieve this with GORM DBResolver? Any guidance would be greatly appreciated.
Maybe we need a factory func to let GORM know how to create a customized gorm.ConnPool implementation instead of *sql.DB.
Thank you.
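One workaround, sketched below, is to keep your own lookup table: since dbresolver hands the policy plain pools, you can record each pool's zone in a map at setup time and consult it in Resolve. Everything here is illustrative — the local ConnPool interface is a stdlib-only mirror of gorm.ConnPool so the sketch compiles without a live database, and stubPool and the zone names are hypothetical stand-ins, not part of the library.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// ConnPool mirrors gorm.ConnPool (assumption: a local copy so this
// sketch is self-contained; the real policy receives []gorm.ConnPool).
type ConnPool interface {
	PrepareContext(ctx context.Context, query string) (*sql.Stmt, error)
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
	QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
	QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row
}

// zoneOf maps each registered pool to its zone. Populate it at setup
// time (e.g. right after opening each connection), since dbresolver
// passes the policy raw pools with no extra metadata attached.
var zoneOf = map[ConnPool]string{}

type NearZonePolicy struct{ PreferredZone string }

func (n NearZonePolicy) Resolve(pools []ConnPool) ConnPool {
	for _, p := range pools {
		if zoneOf[p] == n.PreferredZone {
			return p
		}
	}
	return pools[0] // no pool in the preferred zone: fall back to the first
}

// stubPool is a placeholder ConnPool used only to demonstrate the lookup.
type stubPool struct{ name string }

func (s *stubPool) PrepareContext(context.Context, string) (*sql.Stmt, error) { return nil, nil }
func (s *stubPool) ExecContext(context.Context, string, ...interface{}) (sql.Result, error) {
	return nil, nil
}
func (s *stubPool) QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error) {
	return nil, nil
}
func (s *stubPool) QueryRowContext(context.Context, string, ...interface{}) *sql.Row { return nil }

func main() {
	east, west := &stubPool{"east"}, &stubPool{"west"}
	zoneOf[east] = "us-east-1"
	zoneOf[west] = "us-west-2"

	p := NearZonePolicy{PreferredZone: "us-west-2"}
	chosen := p.Resolve([]ConnPool{east, west})
	fmt.Println(chosen.(*stubPool).name) // prints "west"
}
```

The map key works because interface values holding comparable dynamic types (like *sql.DB) are valid map keys, so the same approach applies to the real pools.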
I'm using TiDB, which can be reached through several addresses directly, so I used dbresolver.
When creating the gorm instance I passed a nil gorm.Dialector, and the later db.AutoMigrate then dereferences a nil pointer.
Does gorm.Open here require a valid connection? Isn't dbresolver already handling the connections afterwards?
Thanks for taking the time to answer.
db, err := gorm.Open(nil, &gorm.Config{})
if err != nil {
	return err
}
err = db.Use(dbresolver.Register(dbresolver.Config{Sources: dias, Policy: dbresolver.RandomPolicy{}}))
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x8a46aa]
goroutine 1 [running]:
gorm.io/gorm.(*DB).Migrator(0x40f227?)
/home/sean/code/go/pkg/mod/gorm.io/[email protected]/migrator.go:23 +0xaa
gorm.io/gorm.(*DB).AutoMigrate(0xc000517d40?, {0xc00050fb50, 0x1, 0x1})
/home/sean/code/go/pkg/mod/gorm.io/[email protected]/migrator.go:28 +0x28
gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage.AutoMigrate(...)
/home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage/init.go:58
gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage.InitStorage(0xc00013e130)
/home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage/init.go:50 +0x2dd
main.initialize()
/home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/cmd/apiserver/main.go:55 +0x1d0
main.main()
/home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/cmd/apiserver/main.go:64 +0x1d
exit status 2
# Dependency versions declared by the dbresolver package; please keep them up to date
require (
	gorm.io/driver/mysql v1.1.0 // currently v1.1.1
	gorm.io/gorm v1.21.9 // currently v1.21.12
)
Sorry, I tried, but the playground code is way too complicated to understand, I couldn't figure out how to create a test for this case.
Here is a small self-contained test to reproduce:
package main

import (
	"fmt"
	"log"
	"math/rand"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/plugin/dbresolver"
)

type RandomPolicy struct{}

func (RandomPolicy) Resolve(connPools []gorm.ConnPool) gorm.ConnPool {
	fmt.Printf("----------- POLICY\n")
	return connPools[rand.Intn(len(connPools))]
}

type User struct {
	ID string `gorm:"primaryKey"`
}

func (*User) TableName() string { return "users" }

func test() error {
	baseDSN := "database=local_db user=root password=root sslmode=disable"
	db, err := gorm.Open(postgres.Open(baseDSN+" host=db"), &gorm.Config{})
	if err != nil {
		return err
	}
	resolver := dbresolver.Register(dbresolver.Config{
		Replicas:          []gorm.Dialector{postgres.Open(baseDSN + " host=db-replica-0")},
		Policy:            RandomPolicy{},
		TraceResolverMode: true,
	})
	if err := db.Use(resolver); err != nil {
		return err
	}
	for i := 0; i < 10; i++ {
		_ = db.Session(&gorm.Session{}).Find(&User{}).Error
	}
	return nil
}

func main() {
	if err := test(); err != nil {
		log.Fatal(err)
	}
}
Result:
[0.798ms] [rows:0] [replica] SELECT * FROM "users"
[0.259ms] [rows:0] [replica] SELECT * FROM "users"
[0.199ms] [rows:0] [replica] SELECT * FROM "users"
[0.205ms] [rows:0] [replica] SELECT * FROM "users"
[0.284ms] [rows:0] [replica] SELECT * FROM "users"
[0.205ms] [rows:0] [replica] SELECT * FROM "users"
[0.187ms] [rows:0] [replica] SELECT * FROM "users"
[0.207ms] [rows:0] [replica] SELECT * FROM "users"
[0.182ms] [rows:0] [replica] SELECT * FROM "users"
[0.177ms] [rows:0] [replica] SELECT * FROM "users"
It always uses the replica, and the Printf output from the policy never appears, indicating that the provided policy is never called.
The Policy is never called.
The docs (https://gorm.io/docs/dbresolver.html#Load-Balancing) mention that GORM supports load balancing (and uses it by default), however, with or without policy, it always uses the read replica.
Running the provided code shows that the policy is never called.
Removing the policy also results in only the replica being used, no load balancing.
Using db.Clauses(dbresolver.Write) properly changes the target from replica to source.
Am I missing something or is it an issue with the lib?
Any pointers would be appreciated. Sorry again I didn't manage to get a test case in the playground.
Thanks in advance.
Regards,
Is it possible to configure a different maximum connection count for the source and replica DBs used by the resolver?
I looked at the official documentation, but it appears to set global connection-pool limits for all connections:
DB.Use(
	dbresolver.Register(dbresolver.Config{ /* xxx */ }).
		SetConnMaxIdleTime(time.Hour).
		SetConnMaxLifetime(24 * time.Hour).
		SetMaxIdleConns(100).
		SetMaxOpenConns(200),
)
Hi! Can I use dr.Call without passing datas to db.Register(config, datas...)?
The callback is only executed when resolvers is not empty, i.e. after db.Register() has been called.
func (dr *DBResolver) Call(fc func(connPool gorm.ConnPool) error) error {
	if dr.DB != nil {
		for _, r := range dr.resolvers {
			if err := r.call(fc); err != nil {
				return err
			}
		}
	} else {
		dr.compileCallbacks = append(dr.compileCallbacks, fc)
	}
	return nil
}
I wanted to close the connections with dr.Call after db.Use(dbresolver)
ClickHouse supports distributed writes through a Distributed-engine table.
dsn1 := "192.168.1.1:9000"
dsn2 := "192.168.1.2:9000"
dsn3 := "192.168.1.3:9000"
dsn4 := "192.168.1.4:9000"

dbConn, err := gorm.Open(clickhouse.Open(dsn1), &gorm.Config{
	SkipDefaultTransaction: true,
})
if err != nil {
	fmt.Println(err)
	return
}

err = dbConn.Use(dbresolver.Register(dbresolver.Config{
	// all four instances registered as sources
	Sources: []gorm.Dialector{clickhouse.Open(dsn2), clickhouse.Open(dsn1), clickhouse.Open(dsn3), clickhouse.Open(dsn4)},
	// sources/replicas load balancing policy
	Policy: dbresolver.RandomPolicy{},
}))
if err != nil {
	fmt.Println(err)
	return
}

err = dbConn.Transaction(func(tx *gorm.DB) error {
	if err = tx.Create(&[]module.Tbl{t}).Error; err != nil {
		return err
	}
	// return nil to commit the transaction
	return nil
})
Four ClickHouse instances form one cluster. When writing through dbConn, writes always go through dsn1, while reads are served from all four DSNs. Why is that?
Expected: writes should also be distributed randomly.
When using dbresolver, how can I record SQL logs?
import (
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/plugin/dbresolver"
)

DB, err := gorm.Open(mysql.Open("db1_dsn"), &gorm.Config{})
DB.Use(dbresolver.Register(dbresolver.Config{
	// use `db2` as sources, `db3`, `db4` as replicas
	Sources:  []gorm.Dialector{mysql.Open("db2_dsn")},
	Replicas: []gorm.Dialector{mysql.Open("db3_dsn"), mysql.Open("db4_dsn")},
	// sources/replicas load balancing policy
	Policy: dbresolver.RandomPolicy{},
}).Register(dbresolver.Config{
	// use `db1` as sources (DB's default connection), `db5` as replicas for `User`, `Address`
	Replicas: []gorm.Dialector{mysql.Open("db5_dsn")},
}, &User{}, &Address{}).Register(dbresolver.Config{
	// use `db6`, `db7` as sources, `db8` as replicas for `orders`, `Product`
	Sources:  []gorm.Dialector{mysql.Open("db6_dsn"), mysql.Open("db7_dsn")},
	Replicas: []gorm.Dialector{mysql.Open("db8_dsn")},
}, "orders", &Product{}, "secondary"))

logger := zapgorm2.New(zap.L())
logger.SetAsDefault()
DB.Logger = logger
SQL that goes through DB is logged; SQL routed through the other connections is not.
I have two MySQL databases that need to be used at the same time, and I want to set DisableForeignKeyConstraintWhenMigrating.
Currently only the primary connection receives this setting; *dbresolver.DBResolver apparently cannot.
var Conn *gorm.DB

func init() {
	InitMultiDatabase()
	// apply some gorm settings
	Conn.Config.Apply(&gorm.Config{
		DisableForeignKeyConstraintWhenMigrating: true,
		PrepareStmt:                              true,
	})
	Conn.Logger = logger.NewGormLogger()
	// create tables
	Conn.AutoMigrate(
		&model.Customer{},
		&model.CustomerBrand{},
		&model.FlowNode{},
		&model.FlowCurrent{},
		&model.FlowUnknown{},
	)
	if !Conn.Migrator().HasTable(model.FlowUnknown{}) {
		Conn.Migrator().CreateTable(&model.FlowUnknown{})
	}
}
func InitMultiDatabase() {
	var err error
	Conn, err = gorm.Open(mysql.Open(config.DB_APP.DSN), &gorm.Config{})
	if err != nil {
		panic(err)
	}
	// configure the primary database's connection pool
	sqlDB, err := Conn.DB()
	if err != nil {
		panic(err)
	}
	sqlDB.SetMaxIdleConns(config.Pool.MaxIdleConns)
	sqlDB.SetMaxOpenConns(config.Pool.MaxOpenConns)
	sqlDB.SetConnMaxIdleTime(config.Pool.ConnMaxIdleTime)
	sqlDB.SetConnMaxLifetime(config.Pool.ConnMaxLifetime)
	// route specific tables to a specific database
	resolver := dbresolver.Register(
		dbresolver.Config{
			Sources: []gorm.Dialector{mysql.Open(config.DB_DATA.DSN)},
		},
		&model.Agent{},
		&model.Flow{},
		&model.Global{},
		&model.Item{},
	)
	// configure the resolver's connection pool
	resolver.SetConnMaxIdleTime(config.Pool.ConnMaxIdleTime).
		SetConnMaxLifetime(config.Pool.ConnMaxLifetime).
		SetMaxIdleConns(config.Pool.MaxIdleConns).
		SetMaxOpenConns(config.Pool.MaxOpenConns)
	// writing it this way panics with a nil pointer dereference
	resolver.Apply(&gorm.Config{
		DisableForeignKeyConstraintWhenMigrating: true,
		PrepareStmt:                              true,
	})
	Conn.Use(resolver)
}
1. Stop the primary database directly, then switch between primary and standby.
2. Log prints: The MySQL server is running with the --read-only option so it cannot execute this statement
3. The read_only parameter is confirmed to be set to 0.
4. MySQL 5.6
dsn: master
db2Dsn: slave
err = db.Use(dbresolver.Register(dbresolver.Config{
	Replicas: []gorm.Dialector{mysql.Open(db2Dsn), mysql.Open(dsn)},
	// sources/replicas load balancing policy, random by default
	// TODO: weighted random between primary and replica reads; customize Policy if adjustment is needed
	Policy: dbresolver.RandomPolicy{},
}, tabs...).
	SetMaxOpenConns(slaveCnf.MaxOpenCons).
	SetMaxIdleConns(slaveCnf.MaxIdleCons).
	SetConnMaxLifetime(time.Duration(slaveCnf.MaxLifetime) * time.Second))
Here is the line where it fails. We are currently using GORM in bux: https://github.com/BuxOrg/bux
We just recently upgraded this package and now it is failing in the Query() method: stmt.DB.Callback().Query().Get("gorm:db_resolver")(stmt.DB)
goroutine 8906 [running]:
testing.tRunner.func1.2({0x184d840, 0x2b6ceb0})
/opt/hostedtoolcache/go/1.17.9/x64/src/testing/testing.go:1209 +0x24e
testing.tRunner.func1()
/opt/hostedtoolcache/go/1.17.9/x64/src/testing/testing.go:1212 +0x218
panic({0x184d840, 0x2b6ceb0})
/opt/hostedtoolcache/go/1.17.9/x64/src/runtime/panic.go:1038 +0x215
gorm.io/plugin/dbresolver.Operation.ModifyStatement({0x1a74da3, 0xc000f1d801}, 0xc000f1dc00)
/home/runner/go/pkg/mod/gorm.io/plugin/[email protected]/clauses.go:27 +0x1c6
gorm.io/gorm.(*DB).Clauses(0x1ee0850, {0xc000e9da50, 0x1, 0x18d0120})
/home/runner/go/pkg/mod/gorm.io/[email protected]/chainable_api.go:32 +0x24f
Additional Outputs from Debugging:
panic: interface conversion: *sql.DB is not interface { SetConnMaxIdleTime(time.Duration) }: missing method SetConnMaxIdleTime
When I use code like the following, it panics:
DB.Use(
	dbresolver.Register(dbresolver.Config{ /* xxx */ }).
		SetConnMaxIdleTime(time.Hour).
		SetConnMaxLifetime(24 * time.Hour).
		SetMaxIdleConns(100).
		SetMaxOpenConns(200))
but if I use
sqlDB, err := db.DB()
sqlDB.SetConnMaxLifetime(24 * time.Hour)
sqlDB.SetMaxIdleConns(100)
sqlDB.SetMaxOpenConns(200)
it works fine.
Docs: Connection-Pool
Starting with version 1.4.2, dbresolver causes a stack overflow when the database connection is closed. This does not happen with 1.4.1 or earlier versions.
Can the source also be configured to serve reads?
Currently reads from the source must be requested explicitly via Clauses.
The type using does not implement clause.Interface; I want to add the Name() string method:
// Name implements the clause.Interface interface
func (u using) Name() string {
	return usingName
}

// MergeClause implements the clause.Interface interface
func (u using) MergeClause(*clause.Clause) {}
Updating from v1.1.0 to v1.2.0 may cause a panic because of the following line:
Line 49 in 691098d
Get("gorm:db_resolver") would return nil if db_resolver is not set up correctly, which causes "runtime error: invalid memory address or nil pointer dereference".
Hi, I'm trying to use a WITH statement, and I expected it to query the replica database instead of the source database.
Here is an example
WITH alias_regions AS (
SELECT * FROM regions
)
SELECT * FROM alias_regions
When I run update statements, will the replica be updated automatically? When I AutoMigrate a table and insert data, the primary gets the table and the data but nothing happens on the replica; reads then go to the replica, so the data cannot be found. Why is that?
Currently the only available policy is random. We need a fallback policy: if the first replica fails, fall back to the second replica.
Eg.
Source: M1 (master 1)
Replicas: R1 (Replica 1), R2 (Replica 2)
The first two scenarios are currently handled; we need a fallback policy for the third scenario.
We have replicas that would be promoted to master in case of a disaster. The first replica may therefore become unavailable while the second replica continues to function. In such a scenario a fallback policy is required.
There is a bug in dbresolver.ParamsFilter.
27: sql, params = filter.ParamsFilter(ctx, sql, params)
This is incorrect, and the correct answer is
27: sql, params = filter.ParamsFilter(ctx, sql, params...)
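The difference matters because Go treats a slice passed without ... as a single variadic argument. A minimal stand-alone illustration (filter here is a hypothetical stand-in for the ParamsFilter hook, not its real signature):

```go
package main

import "fmt"

// filter mimics a ParamsFilter-style variadic hook (hypothetical) to show
// why `params` and `params...` behave differently when forwarding a slice.
func filter(sql string, params ...interface{}) (string, []interface{}) {
	return sql, params
}

func main() {
	params := []interface{}{3, 73}

	// Passing the slice as a single argument nests it: one param, a []interface{}.
	_, wrong := filter("SELECT ?", params)
	fmt.Println(len(wrong)) // 1

	// Spreading with ... forwards each element: two params, as intended.
	_, right := filter("SELECT ?", params...)
	fmt.Println(len(right)) // 2
}
```

The nested case is exactly what produces logs where individual arguments appear collapsed into a single slice.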
My gorm version is github.com/jinzhu/gorm v1.9.16.
I want to introduce DBResolver into our project to achieve read/write splitting and so on, but I found that DBResolver doesn't work with the gorm version we're using.
Can you give me some advice?
While using gorm.io/plugin/dbresolver to split read and write, a raw query with leading whitespace should still go to the read server.
import (
"gorm.io/gorm"
"gorm.io/plugin/dbresolver"
"gorm.io/driver/mysql"
)
DB, err := gorm.Open(mysql.Open("db1_dsn"), &gorm.Config{})
DB.Use(dbresolver.Register(dbresolver.Config{
	// use `db1` as source, `db2` as replica for `users`
	Sources:  []gorm.Dialector{mysql.Open("db1_dsn")},
	Replicas: []gorm.Dialector{mysql.Open("db2_dsn")},
}, "users"))
// ...
readDB, err := gorm.Open(mysql.Open("db2_dsn"), &gorm.Config{})
if err != nil {
// handle error
}
readDB.Create(&User{Name: "read"})
// this query should go to the read db with db2_dsn
DB.Raw(`
select name from users
where name = ?
`, "read").Row().Scan(&name)
This query should go to the read DB with db2_dsn, but the current dbresolver sends it to the write DB with db1_dsn.
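The routing decision hinges on a prefix check of the raw SQL, so the requested fix amounts to trimming leading whitespace before inspecting the statement. The function below is a minimal sketch of that idea, mirroring the behavior the report asks for; it is not the library's actual internal code.

```go
package main

import (
	"fmt"
	"strings"
)

// isReadQuery decides whether a raw statement should be routed to a replica:
// trim leading whitespace first (the fix this report asks for), then check
// the statement prefix, and keep locking reads on the source.
func isReadQuery(sql string) bool {
	q := strings.ToLower(strings.TrimSpace(sql))
	return strings.HasPrefix(q, "select") && !strings.Contains(q, "for update")
}

func main() {
	fmt.Println(isReadQuery("\n\tselect name from users")) // true: goes to replica
	fmt.Println(isReadQuery("update users set name = ?"))  // false: goes to source
}
```

With TrimSpace in place, the multi-line raw query from the report classifies as a read even though it starts with a newline.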
When the replicas are unavailable but the sources are, SELECT statements are routed to the sources; even after the replica service recovers, they keep using the sources.
So I want to add an option to force the use of the replicas, even while the replica service is unavailable.
Our MySQL setup is one master with several slaves; if the read load is very large, sending it to the sources is dangerous.
We are using ROW LEVEL SECURITY to achieve multi tenancy. Since RLS is based on transactions we are unable to utilize the read replicas feature as it always forwards the connections to the write replica.
We would like to be able to control this behavior via some flag to the query operation.
Could you support starting a transaction after manually switching the connection?
For example:
tx = db.Clauses(dbresolver.Use("secondary")).Begin()
dbresolver after updating to v1.5.0 logs time as [1990-01-01 00:00:00 +0000 UTC 1] instead of "1990-01-01 00:00:00".
It differs from v1.4.7 behavior so I think it's a regression.
Minimal example to reproduce bug:
func main() {
	db, _ := gorm.Open(sqlite.Open("gorm.db"), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Info),
	})
	db.Use(dbresolver.Register(dbresolver.Config{
		Sources:           []gorm.Dialector{sqlite.Open("gorm.db")},
		TraceResolverMode: true,
	}))
	db.Raw("SELECT ?", time.Now()).Row()
}
v1.5.0: [0.167ms] [rows:-] [source] SELECT "[2023-12-05 20:56:41.997565 +0900 JST m=+0.004338433]"
v1.4.7: [0.156ms] [rows:-] [source] SELECT "2023-12-05 20:58:36.787"
1. The connection-pool max-connections setting below has no effect. I set the pool's max open connections to 30, but during a concurrency test MySQL reports the client count as the test's concurrency level (128) plus existing connections. Doing the same with raw database/sql, the count is 30 plus existing connections.
2. Please check the code below and tell me whether this is my mistake or a bug, and help resolve it.
gormDb, err := gorm.Open(mysql.Open("Dsn"), &gorm.Config{
})
if err != nil {
// gorm driver initialization failed
return nil, err
}
var resolverConf dbresolver.Config
// if read/write splitting is enabled, configure the read databases (source, read, replicas)
resolverConf = dbresolver.Config{
Sources: []gorm.Dialector{mysql.Open("WriteDsn")},
Replicas: []gorm.Dialector{mysql.Open("ReadDsn")},
Policy: dbresolver.RandomPolicy{},
}
err = gormDb.Use(dbresolver.Register(resolverConf, "").SetConnMaxIdleTime(time.Minute).
SetConnMaxLifetime(10* time.Second).
SetMaxIdleConns(10).
SetMaxOpenConns(30)) // pool max open connections set to 30 here
if err != nil {
return nil, err
}
// concurrency/performance test, also exercising the connection pool
var wg sync.WaitGroup
// max concurrency for this test is set to 128; the problem:
// with raw database/sql, `show processlist` shows 30 + existing connections while the program runs
// with gorm, the pool limit is ignored: the connection count reaches 128 + existing connections
var conNum = make(chan uint16, 128)
wg.Add(1000)
time1 := time.Now()
for i := 1; i <= 1000; i++ {
conNum <- 1
go func() {
defer func() {
<-conNum
wg.Done()
}()
var received []tb_code_lists
variable.GormDbMysql.Table("tb_code_list").Select("code", "name", "company_name", "province", "city", "remark", "status", "created_at", "updated_at").Where("id<=?", 3500).Find(&received)
// fmt.Printf("rows read this pass: %d\n", len(received))
}()
}
wg.Wait()
fmt.Printf("elapsed (ms): %d\n", time.Now().Sub(time1).Milliseconds())
gorm:
r1 := db.Where("type_of = ? AND id = ?", 3, 37).Find(&pojo.push_dictionarie{}).Limit(1)
normal situation config:
TraceResolverMode: false
When TraceResolverMode is false, my SQL looks like:
[3.199ms] [rows:0] SELECT * FROM `push_dictionaries` WHERE type_of = 3 AND id = 73 AND `push_dictionaries`.`deleted_at` IS NULL
You can see that this is correct.
but with the abnormal configuration:
TraceResolverMode: true
the SQL looks very strange:
[3.710ms] [rows:0] [replica] SELECT * FROM `push_dictionaries` WHERE type_of = '[3 73]' AND id = ? AND `push_dictionaries`.`deleted_at` IS NULL
The int arguments are collapsed into a slice.
go.mod requires:
gorm.io/gen v0.3.16
gorm.io/gorm v1.24.0
gorm.io/plugin/dbresolver v1.2.3
Compilation fails with the following error:
2022/10/08 10:15:38 ERROR ▶ 0006 Failed to build the application: # gorm.io/plugin/dbresolver
/go/pkg/mod/gorm.io/plugin/[email protected]/dbresolver.go:139:18:
cannot use map[string]gorm.Stmt{} (value of type map[string]gorm.Stmt) as type map[string]*gorm.Stmt in struct literal
A configuration option (or similar) to pass in (1) a function and (2) a timeout for a given DB: the function is called after each timeout interval, in a loop, forever; it returns database connection credentials, and the DB reconnects with those credentials.
I want to be able to use instances of AWS Redshift's postgres-based database. That's all well and fine, you can just request credentials through their API and use them for connections. The issue is that the longest those credentials are valid is 60 minutes. I need to be able to have Redshift always connected so I can serve data from it as responses to API hits. For example, I could write a function that requests credentials from Redshift that would remain valid for 60 minutes, and specify a timeout of 59 minutes. If a reconnection fails, it could possibly default back to the old credentials/connection and restart the timeout.
So, something where I can essentially write a credentials-providing function and provide a timeout so that the DB is always connected in situations like this would be immensely helpful. I imagine it would also be useful for similar products or connection limitation schemes from cloud providers like Azure. This could be theoretically used for some niche situation where someone wants to roll between different credential sets.
None known
I want to access Informix, SQL Server, and PostgreSQL through the same database resolver.
But Informix is an old server that doesn't support transactions, so I have to enable SkipDefaultTransaction in gorm.Config to avoid errors.
The other databases, on the other hand, work with transactions enabled. Is there a way to set a gorm.Config for each database registered in the resolver?
It seems all resolvers share the same gorm.Config from the main DB. My current code is below:
func BuildDbResolver() (*gorm.DB, error) {
	mainDB, err := gorm.Open(
		postgres.Open(
			fmt.Sprintf(
				"host=%s port=%d user=%s dbname=%s password=%s sslmode=disable",
				PgErpConfig.Address,
				PgErpConfig.Port,
				PgErpConfig.UserName,
				PgErpConfig.Name,
				PgErpConfig.Password,
			),
		),
		&gorm.Config{Logger: NewLogger()},
	)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to postgreSQL database: %v", err)
	}
	resolver := dbresolver.Register(dbresolver.Config{
		Sources:           []gorm.Dialector{ifx.Open(Ids12Config.DSN)}, // how to apply gorm.Config with SkipDefaultTransaction = 'true' to informix only?
		TraceResolverMode: true,
	}).Register(dbresolver.Config{
		Sources:           []gorm.Dialector{sqlserver.Open(EdisonConfig.DSN)},
		TraceResolverMode: true,
	})
	mainDB.Use(resolver)
	return mainDB, nil
}
I am using the Read/Write splitting feature. I would like to log for each query which DSN it goes to (or at least if it goes to a source or replica).
This would be used for troubleshooting failed or slow DB instances, and for testing.
N/A.