
dbresolver's People

Contributors

9monsters, a631807682, dependabot[bot], dotpack, flc1125, iredmail, jinzhu, lzakharov, qqxhb


dbresolver's Issues

Possible to use gorm Preload with dbresolver?

Describe the feature

Is it possible to use gorm Preload to preload data from multiple DB sources?
Currently it seems that the DB connection uses the first source, so tables in the second source cannot be reached when preloading.

Motivation

Support gorm preload when it comes to multiple db sources

Related Issues

add change service operation hook

Describe the feature

Add an option: when the service operation changes to write or read, call a user-provided function.

Motivation

I am writing trace logs and need to know the current service operation status (write / read).

With such a hook I could observe the current operation status.
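A stdlib-only sketch of what such a hook could look like (the `Operation` type, `OperationHook` callback, and `resolver` stand-in are hypothetical, not part of dbresolver):

```go
package main

import "fmt"

// Operation mirrors dbresolver's read/write distinction.
type Operation string

const (
	Read  Operation = "read"
	Write Operation = "write"
)

// OperationHook is a hypothetical callback invoked whenever the
// resolver switches between read and write connections.
type OperationHook func(op Operation)

// resolver is a toy stand-in that fires its registered hooks on every switch.
type resolver struct {
	hooks []OperationHook
}

func (r *resolver) OnOperation(h OperationHook) { r.hooks = append(r.hooks, h) }

func (r *resolver) resolve(op Operation) {
	for _, h := range r.hooks {
		h(op)
	}
}

func main() {
	r := &resolver{}
	r.OnOperation(func(op Operation) {
		fmt.Printf("trace: operation=%s\n", op) // the trace log the issue asks for
	})
	r.resolve(Read)
	r.resolve(Write)
}
```

The real plugin would call the hooks from its callback that picks a source or replica pool.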

How to implement Custom Policy for DBResolver with Zone or other customized Information

I'm currently working on a project where I need to implement a custom policy for DBResolver that takes into account the zone information of each database connection. The goal is to preferentially select a connection from a specific zone when resolving the connection pool.

I've created a custom struct DBWithZone that embeds *gorm.DB, includes a Zone field and implements the gorm.ConnPool interface. I've also implemented a custom policy NearZonePolicy that attempts to select a DBWithZone from the connection pool based on the preferred zone.

However, I've encountered an issue where the gorm.ConnPool in the Resolve method of my custom policy is actually of type *sql.DB, and I can't directly convert it to my DBWithZone type.

Here's a simplified version of my code:

type DBWithZone struct {
 *gorm.DB
 Zone string
}

type NearZonePolicy struct {
 PreferredZone string
}

func (n *NearZonePolicy) Resolve(connPools []gorm.ConnPool) gorm.ConnPool {
 for _, pool := range connPools {
  if dbWithZone, ok := pool.(*DBWithZone); ok {
   if dbWithZone.Zone == n.PreferredZone {
    return dbWithZone.DB
   }
  }
 }
 return connPools[0]
}

In the Resolve method, the type assertion pool.(*DBWithZone) fails because pool is of type *sql.DB.

I'm looking for a way to associate each gorm.ConnPool (or *sql.DB) with its corresponding zone information so that I can implement my custom policy. Is there a recommended way to achieve this with GORM DBResolver? Any guidance would be greatly appreciated.
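One stdlib-only workaround sketch is to keep the zone metadata beside the pools rather than inside them, in a side registry keyed by the pool value (the `ConnPool` interface and `zoneOf` map here are simplified stand-ins, not gorm's types):

```go
package main

import "fmt"

// ConnPool is a minimal stand-in for gorm.ConnPool; in the real plugin
// the pools handed to Resolve are *sql.DB values.
type ConnPool interface{}

// zoneOf is a hypothetical side registry: populate it when opening each
// connection, so Resolve can look zones up without type-asserting pools.
var zoneOf = map[ConnPool]string{}

type NearZonePolicy struct {
	PreferredZone string
}

// Resolve prefers a pool registered in the preferred zone and falls
// back to the first pool otherwise.
func (n NearZonePolicy) Resolve(connPools []ConnPool) ConnPool {
	for _, pool := range connPools {
		if zoneOf[pool] == n.PreferredZone {
			return pool
		}
	}
	return connPools[0]
}

func main() {
	a, b := &struct{ name string }{"pool-a"}, &struct{ name string }{"pool-b"}
	zoneOf[a] = "us-east-1a"
	zoneOf[b] = "us-east-1b"

	p := NearZonePolicy{PreferredZone: "us-east-1b"}
	picked := p.Resolve([]ConnPool{a, b})
	fmt.Println(picked == b) // true: the zone-matching pool wins
}
```

Since *sql.DB pointers are stable for the lifetime of a connection, they work as map keys; the registry only needs to be filled once at setup time.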

Maybe we need a factory func to let Gorm know how to create a customized gorm.ConnPool implementation instead of *sql.DB.

Thank you.

With dbresolver, db.AutoMigrate hits a nil pointer

I'm using TiDB, which can be reached through multiple addresses directly, so I adopted dbresolver.
When creating the gorm instance I passed a nil gorm.Dialector; db.AutoMigrate then dereferences a nil pointer.
Does gorm.Open have to be given a valid connection here? Isn't dbresolver already in use afterwards?
Thanks in advance for your help.

db, err := gorm.Open(nil, &gorm.Config{})
	if err != nil {
		return err
	}

err = db.Use(dbresolver.Register(dbresolver.Config{Sources: dias, Policy: dbresolver.RandomPolicy{}}))
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x8a46aa]

goroutine 1 [running]:
gorm.io/gorm.(*DB).Migrator(0x40f227?)
        /home/sean/code/go/pkg/mod/gorm.io/[email protected]/migrator.go:23 +0xaa
gorm.io/gorm.(*DB).AutoMigrate(0xc000517d40?, {0xc00050fb50, 0x1, 0x1})
        /home/sean/code/go/pkg/mod/gorm.io/[email protected]/migrator.go:28 +0x28
gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage.AutoMigrate(...)
        /home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage/init.go:58
gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage.InitStorage(0xc00013e130)
        /home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/pkg/storage/init.go:50 +0x2dd
main.initialize()
        /home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/cmd/apiserver/main.go:55 +0x1d0
main.main()
        /home/sean/code/src/gitee.com/pingcap_enterprise/tidb-enterprise-manager/cmd/apiserver/main.go:64 +0x1d
exit status 2

Please update the dependency versions declared in go.mod

  • 1. My project already depends on the latest gorm, v1.21.12.
  • 2. Keeping this read/write-splitting package pins older versions, leaving multiple copies of gorm in my module cache.
  • 3. Please update this package promptly and tag a new release.
# dependency versions currently declared by dbresolver, please update
require (
	gorm.io/driver/mysql v1.1.0   //  latest is v1.1.1
	gorm.io/gorm v1.21.9  //  latest is v1.21.12
)

Policy never called, never load balancing

GORM Playground Link

Sorry, I tried, but the playground code is too complicated for me to understand; I couldn't figure out how to create a test for this case.

Here is a small self-contained test to reproduce:

package main

import (
        "fmt"
        "log"
        "math/rand"

        "gorm.io/driver/postgres"
        "gorm.io/gorm"
        "gorm.io/plugin/dbresolver"
)

type RandomPolicy struct{}

func (RandomPolicy) Resolve(connPools []gorm.ConnPool) gorm.ConnPool {
        fmt.Printf("----------- POLICY\n")
        return connPools[rand.Intn(len(connPools))]
}

type User struct {
        ID string `gorm:"primaryKey"`
}

func (*User) TableName() string { return "users" }

func test() error {
        baseDSN := "database=local_db user=root password=root sslmode=disable"
        db, err := gorm.Open(postgres.Open(baseDSN+" host=db"), &gorm.Config{})
        if err != nil {
                return err
        }
        resolver := dbresolver.Register(dbresolver.Config{
                Replicas:          []gorm.Dialector{postgres.Open(baseDSN + " host=db-replica-0")},
                Policy:            RandomPolicy{},
                TraceResolverMode: true,
        })
        if err := db.Use(resolver); err != nil {
                return err
        }

        for i := 0; i < 10; i++ {
                _ = db.Session(&gorm.Session{}).Find(&User{}).Error
        }
        return nil
}

func main() {
        if err := test(); err != nil {
                log.Fatal(err)
        }
}

Result:

[0.798ms] [rows:0] [replica] SELECT * FROM "users"
[0.259ms] [rows:0] [replica] SELECT * FROM "users"
[0.199ms] [rows:0] [replica] SELECT * FROM "users"
[0.205ms] [rows:0] [replica] SELECT * FROM "users"
[0.284ms] [rows:0] [replica] SELECT * FROM "users"
[0.205ms] [rows:0] [replica] SELECT * FROM "users"
[0.187ms] [rows:0] [replica] SELECT * FROM "users"
[0.207ms] [rows:0] [replica] SELECT * FROM "users"
[0.182ms] [rows:0] [replica] SELECT * FROM "users"
[0.177ms] [rows:0] [replica] SELECT * FROM "users"

It always uses the replica, and the Printf inside the policy never fires, which indicates the provided policy is never called.

Description

The Policy is never called.

The docs (https://gorm.io/docs/dbresolver.html#Load-Balancing) mention that GORM supports load balancing (and uses it by default), however, with or without policy, it always uses the read replica.

Running the provided code shows that the policy is never called.

Removing the policy also results in only the replica being used, no load balancing.

Using db.Clauses(dbresolver.Write) properly changes the target from replica to source.

Am I missing something or is it an issue with the lib?

Any pointers would be appreciated. Sorry again I didn't manage to get a test case in the playground.

Thanks in advance.

Regards,

Config different max conns

Your Question

Is it possible to configure a different MaxConnectionCount for the sources and replica DBs used by the resolver?

The document you expected this should be explained

I looked at the official documentation, but it appears to set a single global max-connection setting for all connections:

DB.Use(
  dbresolver.Register(dbresolver.Config{ /* xxx */ }).
  SetConnMaxIdleTime(time.Hour).
  SetConnMaxLifetime(24 * time.Hour).
  SetMaxIdleConns(100).
  SetMaxOpenConns(200)
)

Use dr.Call() without datas in db.Register(config, datas...)

Hi! Can I use dr.Call without datas in db.Register(config, datas...)?

The callback is only invoked when dr.resolvers is non-empty, i.e. after db.Register() has been called:

func (dr *DBResolver) Call(fc func(connPool gorm.ConnPool) error) error {
	if dr.DB != nil {
		for _, r := range dr.resolvers {
			if err := r.call(fc); err != nil {
				return err
			}
		}
	} else {
		dr.compileCallbacks = append(dr.compileCallbacks, fc)
	}

	return nil
}

I wanted to close the connections with dr.Call after db.Use(dbresolver)

multi source write

ClickHouse supports distributed writes through a distributed-engine table.

dsn1 := "192.168.1.1:9000"
dsn2 := "192.168.1.2:9000"
dsn3 := "192.168.1.3:9000"
dsn4 := "192.168.1.4:9000"

dbConn, err := gorm.Open(clickhouse.Open(dsn1), &gorm.Config{
	SkipDefaultTransaction: true,
})
if err != nil {
	fmt.Println(err)
	return
}

err = dbConn.Use(dbresolver.Register(dbresolver.Config{
	// all four instances as sources
	Sources: []gorm.Dialector{clickhouse.Open(dsn2), clickhouse.Open(dsn1), clickhouse.Open(dsn3), clickhouse.Open(dsn4)},
	// sources/replicas load-balancing policy
	Policy: dbresolver.RandomPolicy{},
}))
if err != nil {
	fmt.Println(err)
	return
}

Writing data:

err = dbConn.Transaction(func(tx *gorm.DB) error {
	if err = tx.Create(&[]module.Tbl{t}).Error; err != nil {
		return err
	}
	// return nil to commit the transaction
	return nil
})

Four ClickHouse instances form one cluster. When writing through dbConn, the writes always go through dsn1, while reads are spread across all four DSNs. Why is that?
Expected: writes should also be selected at random.

How can I log SQL when using dbresolver?

import (
  "gorm.io/gorm"
  "gorm.io/plugin/dbresolver"
  "gorm.io/driver/mysql"
)

DB, err := gorm.Open(mysql.Open("db1_dsn"), &gorm.Config{})

DB.Use(dbresolver.Register(dbresolver.Config{
  // use `db2` as sources, `db3`, `db4` as replicas
  Sources:  []gorm.Dialector{mysql.Open("db2_dsn")},
  Replicas: []gorm.Dialector{mysql.Open("db3_dsn"), mysql.Open("db4_dsn")},
  // sources/replicas load balancing policy
  Policy: dbresolver.RandomPolicy{},
}).Register(dbresolver.Config{
  // use `db1` as sources (DB's default connection), `db5` as replicas for `User`, `Address`
  Replicas: []gorm.Dialector{mysql.Open("db5_dsn")},
}, &User{}, &Address{}).Register(dbresolver.Config{
  // use `db6`, `db7` as sources, `db8` as replicas for `orders`, `Product`
  Sources:  []gorm.Dialector{mysql.Open("db6_dsn"), mysql.Open("db7_dsn")},
  Replicas: []gorm.Dialector{mysql.Open("db8_dsn")},
}, "orders", &Product{}, "secondary"))

logger := zapgorm2.New(zap.L())
logger.SetAsDefault()
DB.Logger = logger

SQL that goes through DB gets logged, but SQL routed through the other connections does not.

How to set gorm.Config on the resolver

My current situation

Two MySQL databases must be connected at the same time, and I want to set DisableForeignKeyConstraintWhenMigrating.
Currently only the main connection receives that configuration; *dbresolver.DBResolver apparently cannot accept it.

var Conn *gorm.DB

func init() {
	InitMultiDatabase()

	// apply some gorm settings
	Conn.Config.Apply(&gorm.Config{
		DisableForeignKeyConstraintWhenMigrating: true,
		PrepareStmt:                              true,
	})

	Conn.Logger = logger.NewGormLogger()

	// create the tables
	Conn.AutoMigrate(
	 	&model.Customer{},
	 	&model.CustomerBrand{},
		&model.FlowNode{},
		&model.FlowCurrent{},
		&model.FlowUnknown{},
	)
	if !Conn.Migrator().HasTable(model.FlowUnknown{}) {
		Conn.Migrator().CreateTable(&model.FlowUnknown{})
	}
}

func InitMultiDatabase() {
	var err error
	Conn, err = gorm.Open(mysql.Open(config.DB_APP.DSN), &gorm.Config{})
	if err != nil {
		panic(err)
	}
	// configure the primary's connection pool
	sqlDB, err := Conn.DB()
	sqlDB.SetMaxIdleConns(config.Pool.MaxIdleConns)
	sqlDB.SetMaxOpenConns(config.Pool.MaxOpenConns)
	sqlDB.SetConnMaxIdleTime(config.Pool.ConnMaxIdleTime)
	sqlDB.SetConnMaxLifetime(config.Pool.ConnMaxLifetime)
	if err != nil {
		panic(err)
	}

	// route specific tables to a specific database
	slover := dbresolver.Register(
		dbresolver.Config{
			Sources: []gorm.Dialector{mysql.Open(config.DB_DATA.DSN)},
		},
		&model.Agent{},
		&model.Flow{},
		&model.Global{},
		&model.Item{},
	)

	// configure the resolver's connection pool
	slover.SetConnMaxIdleTime(config.Pool.ConnMaxIdleTime).
		SetConnMaxLifetime(config.Pool.ConnMaxLifetime).
		SetMaxIdleConns(config.Pool.MaxIdleConns).
		SetMaxOpenConns(config.Pool.MaxOpenConns)

	// this call panics with a nil pointer
	slover.Apply(&gorm.Config{
		DisableForeignKeyConstraintWhenMigrating: true,
		PrepareStmt:                              true,
	})

	Conn.Use(slover)
}

An error occurred after the MySQL active/standby switch: The MySQL server is running with the --read-only option so it cannot execute this statement

GORM Playground Link

go-gorm/playground#1

Description

1. Stop the primary directly, then perform an active/standby switch
2. The log prints: The MySQL server is running with the --read-only option so it cannot execute this statement
3. The read_only parameter is confirmed to be set to 0
4. MySQL 5.6
dsn: master
db2Dsn: slave

err = db.Use(dbresolver.Register(dbresolver.Config{
	Replicas: []gorm.Dialector{mysql.Open(db2Dsn), mysql.Open(dsn)},
	// sources/replicas load-balancing policy, random by default
	// todo: weighted random, read-from-primary vs read-from-replica; customize Policy if needed
	Policy: dbresolver.RandomPolicy{},
}, tabs...).
	SetMaxOpenConns(slaveCnf.MaxOpenCons).
	SetMaxIdleConns(slaveCnf.MaxIdleCons).
	SetConnMaxLifetime(time.Duration(slaveCnf.MaxLifetime) * time.Second))

Upgraded from 1.1.0 to 1.2.0, creates a panic in Clauses:27

Here is the line where it fails. We are currently using GORM in bux: https://github.com/BuxOrg/bux

We just recently upgraded this package and now it fails in the Query() method: stmt.DB.Callback().Query().Get("gorm:db_resolver")(stmt.DB)

goroutine 8906 [running]:
testing.tRunner.func1.2({0x184d840, 0x2b6ceb0})
	/opt/hostedtoolcache/go/1.17.9/x64/src/testing/testing.go:1209 +0x24e
testing.tRunner.func1()
	/opt/hostedtoolcache/go/1.17.9/x64/src/testing/testing.go:1212 +0x218
panic({0x184d840, 0x2b6ceb0})
	/opt/hostedtoolcache/go/1.17.9/x64/src/runtime/panic.go:1038 +0x215
gorm.io/plugin/dbresolver.Operation.ModifyStatement({0x1a74da3, 0xc000f1d801}, 0xc000f1dc00)
	/home/runner/go/pkg/mod/gorm.io/plugin/[email protected]/clauses.go:27 +0x1c6
gorm.io/gorm.(*DB).Clauses(0x1ee0850, {0xc000e9da50, 0x1, 0x18d0120})
	/home/runner/go/pkg/mod/gorm.io/[email protected]/chainable_api.go:32 +0x24f

Additional outputs from debugging: (screenshots omitted)

panic: interface conversion: *sql.DB is not interface { SetConnMaxIdleTime(time.Duration) }: missing method SetConnMaxIdleTime

panic: interface conversion: *sql.DB is not interface { SetConnMaxIdleTime(time.Duration) }: missing method SetConnMaxIdleTime

When I use code like this, it panics:

DB.Use(  
    dbresolver.Register(dbresolver.Config{ /* xxx */ }).
    SetConnMaxIdleTime(time.Hour).  
    SetConnMaxLifetime(24 * time.Hour).  
    SetMaxIdleConns(100).  
    SetMaxOpenConns(200))

but if I use

 sqlDB, err := db.DB()
 sqlDB.SetConnMaxLifetime(24 * time.Hour)
 sqlDB.SetMaxIdleConns(100)
 sqlDB.SetMaxOpenConns(200)

It works fine.

Docs: Connection-Pool

type using does not implement clause.Interface

Describe the feature

The type using does not implement clause.Interface; I want to add the 'Name() string' method:

// Name implements clause.Interface interface
func (u using) Name() string {
	return usingName
}

// MergeClause implements clause.Interface interface
func (u using) MergeClause(*clause.Clause){}

Motivation

Related Issues

Master/replica replication

Will the replica update automatically?

When I run update statements, will the replica be updated automatically? After I AutoMigrate a table and insert data, the master has the table and the data, but the replica shows no activity. Since reads go to the replica, no data can be read. Why is that?

Fallback Policy for Replicas

Describe the feature

Currently the only available policy is random. We need a fallback policy: if the first replica fails, fall back to the second replica.

Eg.
Source: M1 (master 1)
Replicas: R1 (Replica 1), R2 (Replica 2)

  • Normal Scenario - Writes Served by M1, Reads Served by R1
  • Disaster Master Down - Writes don't work, Reads Served by R1
  • R1 Promoted to Master (R1 becomes M1) - Writes Served by M1, Reads Fail.
    In this Scenario we want Reads to Shift to R2.

The first two scenarios are handled today; we need a fallback policy for the third.

Our replicas may be in the process of being promoted to master during a disaster, so the first replica can become unavailable while the second continues to function. In that scenario a fallback policy is required.
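A stdlib-only sketch of such a fallback policy (the `Pinger` interface, `FallbackPolicy` type, and `fakePool` double are illustrative; a real implementation would health-check the underlying *sql.DB via its Ping method):

```go
package main

import (
	"errors"
	"fmt"
)

// Pinger abstracts the health check; *sql.DB exposes a similar Ping method.
type Pinger interface {
	Ping() error
}

// FallbackPolicy walks the replicas in order and returns the first
// healthy one, so R2 takes over when R1 goes down.
type FallbackPolicy struct{}

func (FallbackPolicy) Resolve(pools []Pinger) Pinger {
	for _, p := range pools {
		if p.Ping() == nil {
			return p
		}
	}
	return pools[0] // all unhealthy: fail on the first and surface the error
}

// fakePool is a test double with a fixed health state.
type fakePool struct {
	name    string
	healthy bool
}

func (f *fakePool) Ping() error {
	if !f.healthy {
		return errors.New(f.name + " is down")
	}
	return nil
}

func main() {
	r1 := &fakePool{name: "R1", healthy: false} // promoted away, unreachable
	r2 := &fakePool{name: "R2", healthy: true}

	picked := FallbackPolicy{}.Resolve([]Pinger{r1, r2})
	fmt.Println(picked.(*fakePool).name) // R2: reads shift to the healthy replica
}
```

Note that pinging on every Resolve call adds latency; a production version would cache health state and refresh it in the background.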

dbresolver.ParamsFilter Bug

GORM Playground Link

go-gorm/playground#1

Description

There is a bug in dbresolver.ParamsFilter:

27: sql, params = filter.ParamsFilter(ctx, sql, params)

This is incorrect; the correct call spreads the slice into the variadic parameter:

27: sql, params = filter.ParamsFilter(ctx, sql, params...)
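The distinction matters because Go treats a slice passed as one argument differently from a slice spread into a variadic parameter; without `...`, the whole slice arrives as a single parameter. A self-contained illustration (`paramsFilter` is a stand-in with the same variadic shape, not the plugin's code):

```go
package main

import "fmt"

// paramsFilter mimics the variadic signature of the real filter:
// it simply echoes back the SQL and the parameters it received.
func paramsFilter(sql string, params ...interface{}) (string, []interface{}) {
	return sql, params
}

func main() {
	params := []interface{}{3, 73}

	// Bug: passing the slice as a single argument wraps it, so the
	// query sees one parameter whose value is the whole slice.
	_, wrapped := paramsFilter("SELECT ?", params)
	fmt.Println(len(wrapped)) // 1

	// Fix: spreading with ... forwards each element individually.
	_, spread := paramsFilter("SELECT ?", params...)
	fmt.Println(len(spread)) // 2
}
```

This is also consistent with the symptom reported elsewhere in this tracker, where a traced query shows a single parameter rendered as '[3 73]' instead of two separate values.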

An older version of gorm

My gorm version is:
github.com/jinzhu/gorm v1.9.16

I want to introduce dbresolver into our project to achieve read/write splitting, among other things.

But I found that dbresolver does not work with the gorm version we are using.

Can you give me some advice?

Raw SQLs having whitespace(s) as prefix are going to write DB

GORM Playground Link

go-gorm/playground#239

Description

While using gorm.io/plugin/dbresolver for read/write splitting, a raw query prefixed with whitespace should still go to the read server.

import (
  "gorm.io/gorm"
  "gorm.io/plugin/dbresolver"
  "gorm.io/driver/mysql"
)

DB, err := gorm.Open(mysql.Open("db1_dsn"), &gorm.Config{})

DB.Use(dbresolver.Register(dbresolver.Config{
  // use `db1` as sources, `db2` as replicas for `users`
  Sources:  []gorm.Dialector{mysql.Open("db1_dsn")},
  Replicas: []gorm.Dialector{mysql.Open("db2_dsn")},
}, "users"))


// ...

readDB, err := gorm.Open(mysql.Open("db2_dsn"), &gorm.Config{})
if err != nil {
  // handle error
}

readDB.Create(&User{Name: "read"})

// this query should go to the read db with db2_dsn
DB.Raw(`
select name from users
where name = ?
`, "read").Row().Scan(&name)

This query should go to the read db with db2_dsn, but the current dbresolver sends it to the write db with db1_dsn.
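The likely mechanism is a prefix check on the raw SQL that leading whitespace defeats; a stdlib sketch of the fix (the `isReadQuery` helper is illustrative, not dbresolver's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// isReadQuery decides whether a raw statement should be routed to a
// replica. Trimming leading whitespace first is the fix: without it,
// a query starting with "\nselect" fails the prefix check and is sent
// to the write DB.
func isReadQuery(sql string) bool {
	trimmed := strings.TrimSpace(sql)
	return strings.HasPrefix(strings.ToLower(trimmed), "select")
}

func main() {
	raw := `
select name from users
where name = ?
`
	fmt.Println(isReadQuery(raw))                // true: routed to the replica
	fmt.Println(isReadQuery("UPDATE users ...")) // false: routed to the source
}
```

A real router would also have to account for CTEs ("WITH ... SELECT") and locking reads ("SELECT ... FOR UPDATE"), which belong on the source.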

Force use of replicas even when the replicas are unavailable

Describe the feature

When the replicas are unavailable but the sources are, SELECTs are routed to the sources; even after the replica service recovers, SELECTs continue to use the sources.

So I want an option to force the use of the replica service, even when it is unavailable.

Motivation

A typical MySQL deployment is one master with several slaves.

If the read load is very large, sending it to the sources is dangerous.

I want to be able to use read replicas in transactions as well

Describe the feature

We use ROW LEVEL SECURITY to achieve multi-tenancy. Since RLS is based on transactions, we cannot use the read-replica feature, because transactions are always forwarded to the write connection.
We would like to be able to control this behavior via a flag on the query operation.

Transaction-related features

Describe the feature

Is it possible to start a transaction after manually switching the connection?
For example:
tx = db.Clauses(dbresolver.Use("secondary")).Begin()

Motivation

Related Issues

v1.5.0 regression

GORM Playground Link

go-gorm/playground#667

Description

dbresolver after updating to v1.5.0 logs time as [1990-01-01 00:00:00 +0000 UTC 1] instead of "1990-01-01 00:00:00".
It differs from v1.4.7 behavior so I think it's a regression.

Minimal example to reproduce bug:

func main() {
	db, _ := gorm.Open(sqlite.Open("gorm.db"), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Info),
	})

	db.Use(dbresolver.Register(dbresolver.Config{
		Sources:           []gorm.Dialector{sqlite.Open("gorm.db")},
		TraceResolverMode: true,
	}))

	db.Raw("SELECT ?", time.Now()).Row()
}

v1.5.0: [0.167ms] [rows:-] [source] SELECT "[2023-12-05 20:56:41.997565 +0900 JST m=+0.004338433]"
v1.4.7: [0.156ms] [rows:-] [source] SELECT "2023-12-05 20:58:36.787"

Connection pool parameters have no effect

Problem

1. The maximum connection count configured below has no effect. I set the pool maximum to 30, but during a concurrency test MySQL reports the client count as the test's concurrency (128) plus the pre-existing connections. Doing the same with raw database/sql gives 30 plus the pre-existing connections.
2. Please check the code: is my usage wrong, or is this a bug? Help with this would be appreciated.

	gormDb, err := gorm.Open(mysql.Open("Dsn"), &gorm.Config{
	})
	if err != nil {
		// gorm driver initialization failed
		return nil, err
	}
	var resolverConf dbresolver.Config

	// if read/write splitting is enabled, configure the read databases (sources/replicas)
		resolverConf = dbresolver.Config{
			Sources:  []gorm.Dialector{mysql.Open("WriteDsn")},
			Replicas: []gorm.Dialector{mysql.Open("ReadDsn")},
			Policy:   dbresolver.RandomPolicy{},
		}

	err = gormDb.Use(dbresolver.Register(resolverConf, "").SetConnMaxIdleTime(time.Minute).
		SetConnMaxLifetime(10 * time.Second).
		SetMaxIdleConns(10).
		SetMaxOpenConns(30))   // pool maximum set to 30 here
	if err != nil {
		return nil, err
	}

	// concurrency benchmark, also exercising the connection pool
	var wg sync.WaitGroup
	// the test caps concurrency at 128; observations:
	// with raw database/sql, `show processlist` shows 30 + pre-existing connections while running
	// with gorm, the pool cap is ignored and the database sees 128 + pre-existing connections

	var conNum = make(chan uint16, 128)
	wg.Add(1000)
	time1 := time.Now()
	for i := 1; i <= 1000; i++ {
		conNum <- 1
		go func() {
			defer func() {
				<-conNum
				wg.Done()
			}()
			var received []tb_code_lists
			variable.GormDbMysql.Table("tb_code_list").Select("code", "name", "company_name", "province", "city", "remark", "status", "created_at", "updated_at").Where("id<=?", 3500).Find(&received)
			// fmt.Printf("rows read this pass: %d\n", len(received))
		}()
	}
	wg.Wait()
	fmt.Printf("elapsed (ms): %d\n", time.Now().Sub(time1).Milliseconds())

When I configured "TraceResolverMode: true" my sql statements became strange

gorm:

r1 := db.Where("type_of = ? AND id = ?", 3, 37).Find(&pojo.push_dictionarie{}).Limit(1)

normal situation config:

TraceResolverMode: false

When TraceResolverMode is false, my SQL looks like this:

[3.199ms] [rows:0] SELECT * FROM `push_dictionaries` WHERE type_of = 3 AND id = 73 AND `push_dictionaries`.`deleted_at` IS NULL

You can see that this is correct.

but when abnormal situation config:

TraceResolverMode: true

This SQL looks very strange:

[3.710ms] [rows:0] [replica] SELECT * FROM `push_dictionaries` WHERE type_of = '[3 73]' AND id = ? AND `push_dictionaries`.`deleted_at` IS NULL

The integer arguments are forcibly collapsed into a slice.

dbresolver v1.2.3 build error

GORM Playground Link

go-gorm/playground#1

Description

go.mod references:

	gorm.io/gen v0.3.16
	gorm.io/gorm v1.24.0
	gorm.io/plugin/dbresolver v1.2.3

The build fails with the following error:

2022/10/08 10:15:38 ERROR    ▶ 0006 Failed to build the application: # gorm.io/plugin/dbresolver
/go/pkg/mod/gorm.io/plugin/[email protected]/dbresolver.go:139:18: 
cannot use map[string]gorm.Stmt{} (value of type map[string]gorm.Stmt) as type map[string]*gorm.Stmt in struct literal


Credential Sourcing and Reconnection on Timeout

Describe the feature

An option in the configuration (or something) to pass in (1) a function and (2) a timeout for a given DB, where the function is called after the timeout duration continuously (that is, waits for the timeout and calls the function, in a loop, forever) and the function returns database connection credentials and the DB reconnects with those credentials.

Motivation

I want to be able to use instances of AWS Redshift's postgres-based database. That's all well and fine, you can just request credentials through their API and use them for connections. The issue is that the longest those credentials are valid is 60 minutes. I need to be able to have Redshift always connected so I can serve data from it as responses to API hits. For example, I could write a function that requests credentials from Redshift that would remain valid for 60 minutes, and specify a timeout of 59 minutes. If a reconnection fails, it could possibly default back to the old credentials/connection and restart the timeout.

So, something where I can essentially write a credentials-providing function and provide a timeout so that the DB is always connected in situations like this would be immensely helpful. I imagine it would also be useful for similar products or connection limitation schemes from cloud providers like Azure. This could be theoretically used for some niche situation where someone wants to roll between different credential sets.

Related Issues

None known

Is there a way to set a different gorm.Config for each database in the database resolver?

I want to access Informix, SQL Server, and PostgreSQL through the same database resolver.

But Informix is an old server that doesn't support transactions, so I have to enable SkipDefaultTransaction in gorm.Config to avoid errors.
The other databases, on the other hand, work with transactions enabled. Is there a way to set a gorm.Config for each database registered in the resolver?

It seems all resolvers share the main DB's gorm.Config. My current code is below:

func BuildDbResolver() (*gorm.DB, error) {
	mainDB, err := gorm.Open(
		postgres.Open(
			fmt.Sprintf(
				"host=%s port=%d user=%s dbname=%s password=%s sslmode=disable",
				PgErpConfig.Address,
				PgErpConfig.Port,
				PgErpConfig.UserName,
				PgErpConfig.Name,
				PgErpConfig.Password,
			),
		),
		&gorm.Config{Logger: NewLogger()},
	)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to postgreSQL database: %v", err)
	}

	resolver := dbresolver.Register(dbresolver.Config{
		Sources:           []gorm.Dialector{ifx.Open(Ids12Config.DSN)},  // how to apply gorm.Config with SkipDefaultTransaction = 'true' to informix only?
		TraceResolverMode: true,
	}).Register(dbresolver.Config{
		Sources:           []gorm.Dialector{sqlserver.Open(EdisonConfig.DSN)},
		TraceResolverMode: true,
	})
	mainDB.Use(resolver)
	return mainDB, nil
}

For Read/Write splitting, allow logging of which DSN each query goes to

Describe the feature

I am using the Read/Write splitting feature. I would like to log for each query which DSN it goes to (or at least if it goes to a source or replica).

Motivation

This would be used for troubleshooting failed or slow DB instances, and for testing.

Related Issues

N/A.
