
rel's Introduction

REL


Modern Database Access Layer for Golang.

REL is an ORM-ish database layer for Go, designed for layered architecture. It is testable and comes with its own test library. REL also features an extendable query builder that lets you write queries with the builder or with plain SQL.

Features

  • Testable repository with the built-in reltest package.
  • Seamless nested transactions.
  • Elegant, yet extendable query builder with a mix of syntactic sugar.
  • Eager loading.
  • Composite primary keys.
  • Multiple adapters.
  • Soft deletion.
  • Pagination.
  • Schema migration.

Install

go get github.com/go-rel/rel

Getting Started

Examples

License

Released under the MIT License

rel's People

Contributors

01101101m, ahsanulks, aligator, appleboy, deepsource-autofix[bot], dependabot[bot], dranikpg, dyxushuai, enkiprobo, feber, fifsky, fs02, h4ckm03d, i-pva, kataras, kevinalfianto, kmpolakowski, lafriks, mopemope, nalxnet, nosugarcoffee, raihannurr, skolodyazhnyy, snovichkov, wakamenod, xpol, youpy


rel's Issues

Cascading does not work when updating an entity without changes

Hey,

I'm having some trouble cascading updates using rel. I have a database entity with a one-to-many relation:

type User struct {
	ID         string
	Name       string
	Scores     []Score
	TotalScore int
}

type Score struct {
	ID    string
	Name  string
	Value int
}

What I'm trying to do is update the user's Scores field as well as the TotalScore field. The code looks something like this:

user.Scores = scores
user.TotalScore = total

repo.Update(ctx, user, rel.Cascade(true))

This works sometimes 😅

What happens is: if TotalScore has changed, the User entity is updated and the Scores updates are cascaded too. But if the TotalScore value has not changed, the transaction gets rolled back and the Scores are never updated. It happens due to the updatedCount check in repository.go:

	if !mutation.IsMutatesEmpty() {
		var (
			query = r.withDefaultScope(doc.data, Build(doc.Table(), filter, mutation.Unscoped))
		)

		if updatedCount, err := cw.adapter.Update(cw.ctx, query, mutation.Mutates); err != nil {
			return mutation.ErrorFunc.transform(err)
		} else if updatedCount == 0 {
			return NotFoundError{}  // RETURNS HERE
		}

		if mutation.Reload {
			if err := r.find(cw, doc, query); err != nil {
				return err
			}
		}
	}

	if mutation.Cascade { // NEVER REACHED
		if err := r.saveHasOne(cw, doc, &mutation); err != nil {
			return err
		}

		if err := r.saveHasMany(cw, doc, &mutation, false); err != nil {
			return err
		}
	}

The repository returns an error when the entity hasn't changed, so cascading never happens.

As a workaround I've added an updated_at field, which always creates a "fake" change in the User entity (see the sketch below). But I think rel should propagate changes even if the "root" entity didn't change. In general, I'm not sure a no-op update is really an "error" situation.
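For reference, the workaround in struct form. This is only a sketch; it relies on the default Structset's special handling of updated_at that is described in another issue further down this page:

// Sketch of the workaround mentioned above: adding an UpdatedAt field makes
// the default Structset refresh the timestamp on every update, so there is
// always at least one mutate, updatedCount is non-zero, and the cascade
// branch is reached.
type User struct {
	ID         string
	Name       string
	Scores     []Score
	TotalScore int
	UpdatedAt  time.Time // always refreshed by the default Structset
}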

Automatic schema generation?

Are you planning to implement automatic schema generation / migration?

adapter.Migrate(&app.User{}, &app.Book{}) // etc

Iterator + Preload ?

Hi, I have a scenario where I need to load millions of records and their associations. Currently I call Preload after each .Next() of the Iterator, and I'm afraid I'll fall into an N+1 query problem. Is there a better workaround, or is it inevitable?
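In case it helps anyone hitting the same thing, here is the batching approach I'd try: collect records from the iterator into fixed-size batches and preload each batch in one go, so the association costs one query per batch instead of one per record. The Iterate/BatchSize/Next/Preload signatures below are assumptions based on how they're used elsewhere on this page, so treat this as a sketch to adapt:

// processAll streams records in batches so the association is preloaded once
// per batch rather than once per record. Record, "records" and "association"
// are placeholders.
func processAll(ctx context.Context, repo rel.Repository) error {
	const batchSize = 1000

	it := repo.Iterate(ctx, rel.From("records"), rel.BatchSize(batchSize))
	defer it.Close()

	batch := make([]Record, 0, batchSize)
	flush := func() error {
		if len(batch) == 0 {
			return nil
		}
		// one preload query for the whole batch
		if err := repo.Preload(ctx, &batch, "association"); err != nil {
			return err
		}
		// ... process the batch here ...
		batch = batch[:0]
		return nil
	}

	for {
		var record Record
		if err := it.Next(&record); err == io.EOF {
			break
		} else if err != nil {
			return err
		}

		batch = append(batch, record)
		if len(batch) == batchSize {
			if err := flush(); err != nil {
				return err
			}
		}
	}
	return flush()
}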

Has column alias in rel.Select

I like the Select way of writing queries (it provides a good DSL), so I used it to write a semi-complex query instead of the SQL function.

But it seems it doesn't handle column aliases properly:

https://github.com/Fs02/rel/blob/3af9e733867dc779d166d97e05a6cc91b369cd13/adapter/sql/util.go#L27

My code looks like:

query := Select(col+"col", "round((count(*)::float / (select count(*) / 100 from apartments))::numeric, 0) ratio").
.....

And the generated SQL is:

SELECT "beds as col",roun .....

So is there a chance you could make the Escape function alias-aware?

FindAll panics with array pointer type

If you call rel.FindAll and provide it with a variable of type []*Test, it will panic, while you can provide both Test and *Test to rel.Find and it will work just fine. So currently this behavior is inconsistent.
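A minimal illustration of the inconsistency described above (Test is a placeholder type):

var t Test
_ = repo.Find(ctx, &t)          // works
tp := &Test{}
_ = repo.Find(ctx, tp)          // works
var records []*Test
_ = repo.FindAll(ctx, &records) // panics: slice of pointers is not supported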

FindAndCountAll generates wrong query for count

When selecting the total count, the soft-deleted record condition is not added to the count select query.

From log:

time="2021-10-19T17:08:05+03:00" level=info msg=Query args="[false]" fields.time="560.007µs" pid=680 rowCount=1 sql="SELECT * FROM \"clients\" WHERE \"deleted\"=$1 LIMIT 20;"
time="2021-10-19T17:08:05+03:00" level=info msg=Query args="[]" fields.time="420.81µs" pid=680 rowCount=1 sql="SELECT count(*) AS result FROM \"clients\";"

Soft-delete query condition should prefix column with table name

If using rel to query a record:

group := &store.DeviceGroup{}
err := s.db.Find(ctx, group, rel.Join("clients").Where(rel.And(rel.Eq("device_groups.id", id), rel.Eq("clients.id", clientID))))

and both tables have soft-delete columns, rel generates a query that fails with this error:

time="2021-10-19T19:52:09+03:00" level=error msg=Query args="[2 1dfb4a1b-78c7-45a6-bc02-31d05906f111 false]" err="ERROR: column reference \"deleted\" is ambiguous (SQLSTATE 42702)" pid=1082 sql="SELECT * FROM \"device_groups\" JOIN \"clients\" ON \"device_groups\".\"client_id\"=\"clients\".\"id\" WHERE (\"device_groups\".\"id\"=$1 AND \"clients\".\"id\"=$2 AND \"deleted\"=$3) LIMIT 1;"

The soft-delete condition should be prefixed with the table name, e.g. AND "device_groups"."deleted"=$3.

Non-magical structset

Hey,

Great library, very nice and clean. One thing I find a bit inconvenient is the default Structset mutator, which treats the primary, created_at, inserted_at and updated_at fields differently. I would like to explicitly set the created_at field when I'm creating a record, as well as updated_at when I'm updating it; it's not that much code, but it gives you full control over how things are executed. I would also like to manually generate a UUID (primary key) for a record before inserting it, but the default Structset mutation does not allow inserting the primary key; it simply excludes it (I assume because it expects the database to generate the primary key via auto-increment or something similar).

It would be great to have a "non-magical" version of Structset that would insert/update data as-is, without any assumptions.

For the moment, as a workaround, I use a combination of the Structset and Set mutators:

orm.Insert(ctx, record, rel.NewStructset(record, false), rel.Set("id", record.ID))

A bit verbose but does the job.

CreatedAt set to current date if empty on Update

var e = employee{Name: "Wael"}

if err = repo.Insert(context.Background(), &e); err != nil {
    panic(err)
}

fmt.Println("employee ", e, "created")

var e2 = employee{Name: "Abbas"}
e2.ID = e.ID
time.Sleep(60 * time.Second)

if err = repo.Update(context.Background(), &e2); err != nil {
    panic(err)
}
fmt.Println("employee ", e2, "updated")

Found:
e2.CreatedAt gets updated:

2020/09/24 23:43:53 [duration: 73.43047ms op: adapter-query] INSERT INTO "employees" ("created_at","updated_at","name") VALUES ($1,$2,$3) RETURNING "id";
employee  {4 Wael 2020-09-24 23:43:53 +0300 +03 2020-09-24 23:43:53 +0300 +03} created
employee  {4 Abbas 2020-09-24 23:44:53 +0300 +03 2020-09-24 23:44:53 +0300 +03} updated
2020/09/24 23:44:53 [duration: 14.41539ms op: adapter-exec] UPDATE "employees" SET "id"=$1,"name"=$2,"created_at"=$3,"updated_at"=$4 WHERE "id"=$5;

Expected:
e2.CreatedAt should be kept unchanged (2020-09-24 23:43:53 +0300)

full source code: https://gist.github.com/mhewedy/a90cd7946907e9aabd04c873cb326c34


Suggestion:
I would suggest checking the statement type here:
https://github.com/Fs02/rel/blob/c7b126f33cb90e773956ad66ee10a4fa5cdc6660/structset.go#L31
If it is an insert, execute the CreatedAt logic; otherwise skip it.

What do you think?
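In the meantime, a workaround that seems consistent with the behaviour above is to load the persisted row first, so CreatedAt is already populated before Update runs (a sketch, not verified):

// Load the stored employee so CreatedAt carries its original value, then
// mutate and update; the default Structset should leave a non-zero
// CreatedAt untouched.
var e2 employee
if err := repo.Find(context.Background(), &e2, where.Eq("id", e.ID)); err != nil {
	panic(err)
}
e2.Name = "Abbas"
if err := repo.Update(context.Background(), &e2); err != nil {
	panic(err)
}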

Add schema support

Currently there is no way, at least in migrations, to create/drop schemas for databases that support them, like PostgreSQL.

Preload as query

Assume we have the following struct:

type Author struct {
	ID   int
	Name string
}

type Article struct {
	ID      int
	Title   string
	Content string
	Author  Author
}

Normally we have to call two queries to get an article and its author:

repo.Find(ctx, &article, where.Eq("id", 1))
repo.Preload(ctx, &article, "author")

This issue proposes a query option that triggers auto-preload, like this:

repo.Find(ctx, &article, where.Eq("id", 1), rel.Preload("author"))
// or
repo.Find(ctx, &article, rel.Select().Where(where.Eq("id", 1)).Preload("author"))

Error handling?

How can I check what type of error was returned?
Using database/sql or anything similar I could simply do:

err := ...
switch {
case errors.Is(err, db.ErrNoMoreRows):
	// handle ErrNoMoreRows
case errors.Is(err, db.ErrForeignKeyConstraint): // pseudo error, doesn't actually exist
	// handle ErrForeignKeyConstraint
default:
	// handle everything else
}

How can I apply the same pattern here?
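For reference, the repository snippet in the first issue above returns a NotFoundError, so at least that case looks checkable with errors.As. A sketch; whether constraint violations are exposed as typed errors in the same way is something I'm not sure about:

// Sketch, assuming rel exposes typed errors such as rel.NotFoundError
// (seen returned by repository.Update in another issue on this page).
var user User
err := repo.Find(ctx, &user, where.Eq("id", id))

var notFound rel.NotFoundError
switch {
case err == nil:
	// record found
case errors.As(err, &notFound):
	// handle missing record
default:
	// handle other errors
}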

Add option for filtered/conditional index option

Add an option that implements rel.KeyOption for an index WHERE condition.

As far as I know only MySQL does not support this.

PostgreSQL: https://www.postgresql.org/docs/9.1/sql-createindex.html
SQLite3: https://www.sqlite.org/partialindex.html
MSSQL: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-index-transact-sql?view=sql-server-ver15

so that it could be specified, for example, like this:

t.Unique([]string{"column"}, rel.KeyCondition("NOT deleted"))

That would generate:

CREATE UNIQUE INDEX ON table (column) WHERE NOT deleted;

Technically this could easily be done with a simple type, but I don't know how to implement it conditionally so as not to break the MySQL adapter, which does not support this:

// KeyCondition filter option for unique key.
type KeyCondition string

func (kc KeyCondition) applyKey(key *Key) {
	key.Options += fmt.Sprintf(" WHERE (%s)", kc)
}

Cascade delete support?

hi!

Just found your project on Reddit and I'm very interested in trying it out. I've used GORM before but couldn't get myself to like it; this seems like a viable alternative.

As for my question: does rel support cascade delete at this point?

Execute arbitrary sql statements

I have an update statement that looks like:

update matches m
    set viewed = true
    where m.request_id in (
        select r.id
        from requests r
                    join users u on r.user_id = u.id
        where u.username = $1)

And I need to execute it via rel, but I couldn't (and I don't want to use the rel DSL here 😄).

If it were a select query, I could use the SQL function and pass it to FindAll.
I tried the Update function, but that doesn't seem to work.

I had to use the underlying *sql.DB Exec method.
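For anyone else landing here, the fallback sketched below is what the last sentence describes: keep a plain *sql.DB handle and run the statement through database/sql directly, bypassing rel entirely (the driver name and DSN are assumptions):

// Run the raw UPDATE through database/sql; rel is not involved here.
db, err := sql.Open("postgres", dsn)
if err != nil {
	log.Fatal(err)
}
defer db.Close()

const stmt = `
update matches m
    set viewed = true
    where m.request_id in (
        select r.id
        from requests r
                 join users u on r.user_id = u.id
        where u.username = $1)`

res, err := db.ExecContext(ctx, stmt, username)
if err != nil {
	log.Fatal(err)
}
affected, _ := res.RowsAffected()
log.Println("rows affected:", affected)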

Preload Breaking in 0.21

Hi, I was trying to upgrade from 0.10 to the latest version but found an issue introduced in 0.21.

Using the prior version we can do the following:

type Discussion struct {
	ID          int
	ComplaintID *int
	Complaint   *DiscussionComplaint `autosave:"true"`
}

// discussion has zero or one complaint
type DiscussionComplaint struct {
	ID int
}

// Table to override table name
func (dc DiscussionComplaint) Table() string {
	return "complaints"
}

func main() {
	var discussions []Discussion
	// on 0.20 we can do the following
	complaintFilter := rel.Gte("complaint_id", start).And(rel.Lte("complaint_id", finish))
	pg.FindAll(context.Background(), &discussions, rel.Preload("complaint"), complaintFilter)
	for _, discussion := range discussions {
		fmt.Println("discussion id", discussion.ID)
		fmt.Println("discussion.complaint_id", discussion.ComplaintID)
		fmt.Println("discussion.complaint.id", discussion.Complaint.ID)
	}
}

Starting from 0.21 it raises an error:
panic: value method main.DiscussionComplaint.Table called using nil *DiscussionComplaint pointer

Do we need to make some kind of adjustment to be able to upgrade to 0.21? Thanks!

Associated id is saved automatically?

Hi!.

I'm having a little problem saving an associated id. I don't know if I'm doing it right. Let me explain what is happening.

I have this DB structure:

-- authors definition

CREATE TABLE authors (
	id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
	name TEXT
);

-- books definition

CREATE TABLE books (
	id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
	title TEXT,
	category TEXT,
	price INTEGER,
	discount NUMERIC,
	stock INTEGER,
	author_id INTEGER NOT NULL,
	FOREIGN KEY(author_id) REFERENCES authors(id)
);

These are my go models:

type Author struct {
	ID   int32
	Name string
}

type Book struct {
	ID       int32
	Title    string
	Category string
	Price    int
	Discount bool
	Stock    int
	Author   Author `ref:"author_id" fk:"id" autosave:"true"`
	AuthorID int32
}

I want to save an Author instance and assign it to a Book instance. I was expecting the relation to save AuthorID automatically, but no. It shows this error:

panic: ForeignKeyConstraintError: rel: inconsistent belongs to ref and fk

This is the code I'm running:

const dbPath = "./go-rel-example.db"

func main() {
	adapter, err := sqlite3.Open(dbPath)
	if err != nil {
		panic(err)
	}

	defer adapter.Close()

	repo := rel.New(adapter)

	err = Example(context.Background(), repo)
	if err != nil {
		panic(err)
	}

}

func Example(ctx context.Context, repo rel.Repository) error {

	return repo.Transaction(ctx, func(ctx context.Context) error {
		author := Author{
			Name: "Anne Paterson",
		}

		err := repo.Insert(ctx, &author)
		if err != nil {
			return err
		}
		log.Println(author)

		book := Book{
			Title:    "A new story",
			Category: "horror",
			Price:    10,
			Discount: false,
			Stock:    2,
			Author:   author,
		}
		err = repo.Insert(ctx, &book)
		if err != nil {
			return err
		}

		log.Println(book)

		log.Println("-----DONE-----")

		return nil
	})

}

Now my question is whether this can be done. I don't have any problem passing the id to the association manually, but I think it would be great to pass the saved instance and have rel take the associated id and save it.

Thanks!!! 🏆 REL is awesome! 😃

postgres error when inserting slice value

I have a struct with data
type Example struct { RelatedIDs []int }

When inserting into the database, I get an error with the message sql: converting argument $8 type: unsupported type []int, a slice of int. But Postgres already supports array data types. I think the query builder in rel is missing an implementation for slice values.
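A possible workaround until array support lands, assuming the Postgres adapter hands field values straight to database/sql: use a slice type that implements driver.Valuer/sql.Scanner, such as the array types from github.com/lib/pq (a sketch, not a confirmed rel feature):

import "github.com/lib/pq"

// pq.Int64Array implements driver.Valuer and sql.Scanner, so it can be bound
// to a Postgres integer[] column instead of a plain []int.
type Example struct {
	RelatedIDs pq.Int64Array
}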

Support association using join table such as Many to Many

Idea 2

Example

type Channel struct {
	ID   int
	Name string

	// mapped to the singular version of the field name defined in "db" (subscriber) inside the through association.
	// implicitly triggers two preloads: Preload("subscriptions") and Preload("subscriptions.subscriber")
	// the result is then flattened and mapped to subscribers.
	Subscribers   []User         `db:"subscribers" through:"subscriptions"`
	Subscriptions []Subscription `ref:"id" fk:"channel_id"`
}

type User struct {
	ID   int
	Name string

	Subscriptions []Subscription
	Channels      []Channel `through:"subscriptions"`

	// self-referencing needs two intermediate references to be set up.
	// triggers Preload("user_followings") and Preload("user_followings.following")
	Followings     []User   `through:"user_followings"` // map to following field
	UserFollowings []Follow `ref:"id" fk:"follower_id"`

	// triggers Preload("user_followers") and Preload("user_followers.follower")
	UserFollowers []Follow `ref:"id" fk:"following_id"`
	Followers     []User   `through:"user_followers"` // map to follower field
}

type Follow struct {
	Follower    User
	FollowerID  int `db:",primary"`
	Following   User
	FollowingID int  `db:",primary"`
	Accepted    bool // this way, it may contain additional data
}

type Subscription struct {
	ID           int `db:",primary"`
	Subscriber   User
	SubscriberID int
	Channel      Channel
	ChannelID    int
	CreatedAt    time.Time
	UpdatedAt    time.Time
}

Specifications

  • The join association is defined using a tag called through and doesn't support nested through.
  • It's a read-only association (all modifications will be ignored).
  • Every preload of a has-through association will implicitly trigger two preloads: the first is the association defined by the through tag, the second is the association inside the through field whose name is the singular form of the has-through field. The result is then flattened and mapped to the final association.
  • Preload has many through
  • Preload has one through

Merit/Demerit

(+) Supports has one and has many through.
(+) Can be made read-only while updates remain possible through the explicit join association.
(+) We can have metadata on the join table.
(-) Requires additional associations to be defined (especially verbose for self-referencing associations).

Idea 1

Example:

// Table subscription_users: user_id(int), subscription_id(int)
// Table followers: followed_id(int), following_id(int)

type Subscription struct {
	ID   int
	Name string

	// 1. basic declaration:
	// subscription:id <- subscription_id:subscription_users:user_id -> user:id
	Users []User `ref:"id:subscription_id" fk:"id:user_id" through:"subscription_users"`
}

type User struct {
	ID   int
	Name string

	// 2. back ref
	//    user:id <- user_id:subscription_users:subscription_id -> subscriptions:id
	Subscriptions []Subscription `ref:"id:user_id" fk:"id:subscription_id" through:"subscription_users"`

	// 3. omitting ref and fk tag, will be guessed based on table name:
	//    user:id <- user_id:subscription_users:subscription_id -> subscriptions:id (the same as 2)
	// Subscriptions []Subscription through:"subscription_users"`

	// 4. Self-referencing many to many
	Followers  []User `ref:"id:following_id" fk:"id:follower_id" through:"followers"`
	Followings []User `ref:"id:follower_id" fk:"id:following_id" through:"followers"`
}

Specifications

  • The join table is defined using a tag called through.
  • subscription:id <- subscription_id:subscription_users:user_id -> user:id
    is declared as:
    ref:"id:subscription_id" fk:"id:user_id" through:"subscription_users"
  • ref and fk can be completely omitted and will be guessed as many-to-many when the through tag is present; otherwise the has-many rule applies.
  • Preloading support.
  • Insert/Update support. Edit: too complex and too magical to implement.

Merit/Demerit

(+) Can implement many-to-many without an additional struct.
(-) Doesn't work for has one through.
(-) Can't be made fully read-only (updates are possible, but only on the join data).

Rows Modified Support

Hey there, just wanted to say nice job on this package; it solves a lot of my concerns with a layered architecture. :)

I just wanted to ask: is there any way to get the rows modified from a SQL query? I know GORM has it as:

rowsAffected := db.
	Where("field=?", someVal).
	Where("otherField=?", somVal2).
	Update(&record).
	RowsAffected

Soft deletion still has issues

Hi, I tried soft deletion to see if it works, but I found it still has a bug, especially on MySQL and MariaDB.

Here is an example while inserting on MySQL in Docker (screenshot omitted).

The insert writes a '0000-00-00' value, but the query filters with deleted_at IS NULL, so it basically always returns empty, since '0000-00-00' is not considered a NULL value.
I know we could avoid this using rel.Unscoped(true), but that also breaks the autoload query.
I tried 2 scenarios using an association between 2 tables, product and category, with a 1-1 relation.

Scenario 1:
product table without deleted_at, category table with deleted_at. Like I said before, autoload returns an empty category when querying from product (the IS NULL problem when querying category).
Scenario 2:
both product and category tables have a deleted_at field. I couldn't get the product except when using unscoped, but if I use it, autoload doesn't work.

Here is the example on MariaDB in XAMPP (screenshot omitted).

On MySQL the insert still has a problem (I'm not sure why), but on MariaDB the insert works perfectly fine.
So basically, to make this work, maybe we should change the query to SELECT * FROM 'categories' WHERE deleted_at = '0000-00-00 00:00:00';
or we could just add an OR to the where condition.

Add support for database functions

It would be nice to have the ability to specify some common functions in filters and default values.

At least a few I can think of:

  • Current date and time (current_timestamp, current_date, current_time)
    • MSSQL - getdate(), convert (date, getdate()), convert (time, getdate())
  • upper/lower - same for all currently supported
  • substring
    • PostgreSQL - substring ( string text [ FROM start integer ] [ FOR count integer ] )
    • MSSQL, sqlite3, MySQL - substring ( expression, start, length )
  • nullif/coalesce - same for all currently supported
  • least/greatest
    • sqlite3 - min, max
    • MSSQL, PostgreSQL, MySQL - least, greatest

Probably some others too; the adapter should build them, since they can differ between databases.

sql: converting argument $4 type: unsupported type XXX, a struct

Hi. Very nice project, but I have some problems; can you help me? Just see the code below:


import (
	"context"
	"log"
	"time"

	"github.com/go-rel/rel"
	"github.com/go-rel/rel/adapter/postgres"
	_ "github.com/lib/pq"
)

type Status struct {
	Verified bool
	Archive  bool
	UserID   int
}
type User struct {
	ID        int
	Email     string
	Password  string
	JoinedAt  time.Time
	LastLogin time.Time
	Status    *Status
}

func main() {
	dsn := "postgres://postgres:[email protected]:5432/store?sslmode=disable"
	adapter, err := postgres.Open(dsn)
	if err != nil {
		log.Println("Error:", err)
		panic(err)
	}
	defer adapter.Close()

	rep := rel.New(adapter)

	user := User{
		Email:    "[email protected]",
		Password: "verySecret42",
		Status:   &Status{Verified: false, Archive: false},
	}

	rep.Insert(context.Background(), &user)
}

After running this code I get the error sql: converting argument $4 type: unsupported type main.Status, a struct. What's wrong in the code?
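The adapter is trying to bind the nested Status struct as a single column value. If Status is meant to live in its own table, it should be declared as an association (see the other issues on this page about ref/fk tags); if it is meant to live in one json/jsonb column, one common approach is to give it Valuer/Scanner implementations. A sketch of the latter, assuming values are passed through to database/sql:

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

// Value serializes Status to JSON so it can be stored in a json/jsonb column.
func (s *Status) Value() (driver.Value, error) {
	if s == nil {
		return nil, nil
	}
	return json.Marshal(s)
}

// Scan deserializes a JSON column back into Status.
func (s *Status) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return fmt.Errorf("unexpected type %T for Status", src)
	}
	return json.Unmarshal(b, s)
}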

Log queries ?

Hi, thanks for this perfect tool; I'm sure it will get many stars soon.
Is there any way to use the logger or add custom logging?
Thank you.
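There is an Instrumentation hook on the repository; its callback signature shows up in the opentelemetry question further down this page. A minimal logging sketch built on it:

// Log every operation with its query string, duration, and error (if any).
repo.Instrumentation(func(ctx context.Context, op string, message string) func(err error) {
	start := time.Now()
	return func(err error) {
		if err != nil {
			log.Printf("[rel] op=%s duration=%s err=%v query=%s", op, time.Since(start), err, message)
			return
		}
		log.Printf("[rel] op=%s duration=%s query=%s", op, time.Since(start), message)
	}
})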

Add support for JSON data type

That would be especially useful in migrations when defining table columns, etc.

PostgreSQL and MySQL have JSON data types. SQLite can fall back to text.

Migration Error

Hi guys

I receive the following error when I want to migrate some changes to an already migrated database:
Error: adapter-exec Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ';parseTime=True' at line 1

Reproduce:

  1. Start the database with docker-compose:
version: '3'
services:
  mariadb:
    image: mariadb:latest
    ports:
      - 3398:3306
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_USER=test
      - MYSQL_DATABASE=test
      - MYSQL_PASSWORD=test
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
  2. Create the following env variables:
MYSQL_HOST=localhost
MYSQL_PORT=3398
MYSQL_USERNAME=test
MYSQL_PASSWORD=test
MYSQL_DATABASE=test
  3. Create a new table in a migrations file.
  4. Start the migration with rel migrate -adapter=github.com/go-rel/rel/adapter/mysql -driver=github.com/go-sql-driver/mysql

Now step 4 fails.
Does anyone have an idea?
Many thanks!

godoc on struct methods not on interface methods

I am starting to use rel in a side project, and mostly I like it. (I have a few catches; I might open issues for them later.)

But for now, I want to mention one important thing: the godoc doesn't appear in the IDE (I use IntelliJ with the Go plugin).

This is because the documentation is on the struct methods, not on the Repository interface methods, which makes it harder to learn about the individual methods.

The documentation popup shows no godoc on the interface method; I have to navigate to the repository struct implementation, and only there can I find the godoc (screenshots omitted).

Object Mapper design/functionality?

Thank you for the effort you have put into this project and making it available as open source.

I'm new to Go and am looking for an Object Mapper library in Golang. I'm struggling to see past the syntax right now, hence this question.

Essentially, the Object Mapper approach builds on the lessons learned from the ActiveRecord and DataMapper patterns/conventions, as well as the implementation lessons from the like-named projects.

Something with the design and functionality of rom-rb.

Does your library offer Object Mapper-type functionality?
If not, would you be kind enough to share the names of any OM libraries you know of in Golang?

Multiple connection support

It would be nice to have support for providing multiple connections with a write or read-only role, so that it would be possible to use, for example, a PostgreSQL cluster that has a single master and multiple read-only nodes. Rel would then send write operations to the master and select operations to one of the replicas.

[question] repo.Instrumentation: how can the context be passed on?

I am using go.opentelemetry.io/otel to record traces, but the context can't be passed on.

// database trace
repo.Instrumentation(func(ctx context.Context, op string, message string) func(err error) {
	span := trace.SpanFromContext(ctx)

	if span.IsRecording() { // only the first transmission from outside can be recorded
		ctx, span = dbTracer.Start( // the ctx here can't be passed on??
			ctx,
			op,
			trace.WithSpanKind(trace.SpanKindServer),
		)

		span.SetAttributes(
			attribute.String("db.sql", message),
		)
	}

	return func(err error) {
		if err != nil {
			span.RecordError(err)
			span.SetStatus(codes.Error, err.Error())
		}
		span.End()
		log.Printf("[SQL] operation:%s query:%s", op, message)
	}
})

Maybe return ctx, func(err error)?

How to work with insert operations and relations, are there any examples?

Hello, I've read the documentation but don't understand how insert operations work; see my short example below.

// we have the following migrations
func MigrateCreateProfile(schema *rel.Schema) {
	schema.CreateTable("profile", func(t *rel.Table) {
		t.ID("id")
		t.String("middle_name")
		t.String("first_name")
		t.String("last_name")
	})
}

func MigrateCreateUser(schema *rel.Schema) {
	schema.CreateTable("users", func(t *rel.Table) {
		t.ID("id")
		t.String("email", rel.Unique(true), rel.Required(true))
		t.String("phone", rel.Unique(true))
		t.String("password", rel.Required(true))
		t.DateTime("joined_at", rel.Default(time.Now()))
		t.Int("profile", rel.Unique(true))

		t.ForeignKey("profile", "profile", "id")
	})
}

// and the following code maps tables to structs
type User struct {
	ID       int
	Email    string
	Phone    string
	Password string
	JoinedAt time.Time
	Profile  *Profile `auto:"true"`
}

type Profile struct {
	ID         int
	FirstName  string
	MiddleName string
	LastName   string
}
// and the following code inserts a user into the table.
func Save(ctx context.Context, record interface{}) error {
	fmt.Printf("%#v", record)
	return r.repository.Transaction(ctx, func(ctx context.Context) error {
		return r.repository.Insert(ctx, record)
	})
}

From the docs, after this code a new record should be created in the profile table and a new record in users. What's wrong in my code?

Maybe you have some example code about relations and CRUD operations using go-rel?

Support extended types

Currently, if you have a type for a table:

type Test struct {
	ID   int64
	Name string
}

func (t Test) Table() string {
	return "test"
}

and you want to extend it to select an additional field from another table alongside it, it's not possible to create the type:

type TestExt struct {
	Test
	CustomField string
}

When you use TestExt in rel.Find, it panics with the error rel: references (id) field not found.

In the function rel.extractDocumentData it finds two fields, test and custom_field, while it should find id, name and custom_field.

Problem migrating

Hello,

I am attempting to run rel migrate on a blank, fresh database. I have a migration file (screenshot omitted), but I am getting an error:

go: updates to go.sum needed, disabled by -mod=readonly

This is in a simple test project with a freshly initialized go.mod. Any thoughts? Thanks!

Dump and Load Database Schema Migration

Background

If you've used Rails in the past, chances are you've seen the db/schema.rb file. It's a snapshot of the current database schema, written in a Ruby DSL. The dumped schema is database agnostic and can be used to restore the database schema, something that is very useful in a CI environment, where running hundreds of migrations one by one for every test build consumes a lot of time. This file is also useful during peer review, to see the result of a migration.

In REL, we would like to have something similar, but in Go. Its potential usage is not limited to loading the schema in a CI environment (which we don't need anyway, given we have the reltest package 😏); it also includes the following:

  • Validate record modifications in the reltest package, e.g. making sure Set("name", "your name") actually sets an existing field in the database with the correct type.
  • Validate queries in the reltest package, e.g. making sure where.Eq("name", "your name") is a valid query against a valid field with a valid type and operator.
  • To implement the automatic schema migration generator (#49). The snapshot of the database can be compared to modifications of the related structs, and a proper migration can be generated based on that, e.g. https://alembic.sqlalchemy.org/en/latest/autogenerate.html
  • As a foundation for a type-safe query builder generator that can extend REL's query builder.

Implementation

Adapter Updates

Two new functions need to be added to each adapter to support this feature.
The implementation for each database might be completely different, so we may not be able to reuse the Load and Dump methods across all adapters.

type Adapter interface {
	// ...

	// load schema back to database.
	// Probably will call Apply, after setting some variables (ex: disabling foreign key check)
	Load(ctx context.Context, schema Schema) error

	// Dump database schema as plain go struct.
	Dump(ctx context.Context) (Schema, error)
}

Dump Result

// file: db/schema.go

package db

// TBD

CLI

  • rel dump - Dump database schema to db/schema.go.
  • rel load - Load db/schema.go to database.
  • rel migrate and rel rollback - modify these to always call dump after the command executes.

Tasks

  • SQLite3 Adapter Dump and Load function.
  • MySQL Adapter Dump and Load function.
  • Postgres Adapter Dump and Load function.
  • Migrator function to translate Adapter dump function result to golang code in db/schema.go as migration DSL.
  • rel dump command.
  • rel load command.
  • rel migrate and rel rollback modification.

[Question] What is the correct way to soft delete all records?

I guess DeleteAll is not it, since the second parameter of r.deleteAll is Invalid.

func (r repository) DeleteAll(ctx context.Context, query Query) (int, error) {
	finish := r.instrumenter.Observe(ctx, "rel-delete-all", "deleting multiple records")
	defer finish(nil)

	var (
		cw = fetchContext(ctx, r.rootAdapter)
	)

	return r.deleteAll(cw, Invalid, query)
}

Sorry if I missed it in the docs.

Thanks in advance.

Proposal: API support context.Context

The purpose of carrying the context around would be to have it available in callback functions or for execution timeouts, where it could be used to enhance logging, trace execution times (with opentracing etc.), or perhaps carry some authentication context or user id to the callbacks.

Optimistic Locking

Background

Optimistic locking (or optimistic concurrency control) is a technique that allows concurrent edits on a single record (more: https://hexdocs.pm/ecto/0.9.0/Ecto.Model.OptimisticLock.html)

Implementation

  • Detect that optimistic locking is enabled when a struct contains a LockVersion int field.
  • Use the LockVersion field whenever Update or Delete is performed, to prevent operating on a stale version of the record. Maybe we can simply add an additional where filter LockVersion=? to the update or delete query (see the sketch below).
  • Differentiate the error for updating/deleting a stale record from the one for a non-existent record.
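A sketch of what the proposed convention could look like from the user's side (illustrative only; the field name and SQL follow the description above):

// A struct opts in to optimistic locking by declaring a LockVersion int field.
type Article struct {
	ID          int
	Title       string
	LockVersion int // incremented on every successful update
}

// Intended effect on the generated query (illustrative SQL):
//
//   UPDATE articles
//   SET title=?, lock_version=lock_version+1
//   WHERE id=? AND lock_version=?;
//
// Zero affected rows would then surface as a "stale record" error, distinct
// from a plain not-found error.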

Error when cascade deleting record with composite keys

It seems the delete query is not properly constructed when cascade-deleting one-to-many associations.

The query I get looks like this:

DELETE FROM `payment_transaction` WHERE (`payment_id`=? AND );

Here is the code which produces the invalid query:

type Payment struct {
	ID           string `db:"id"`
	Transactions []Transaction   `rel:"id" fk:"payment_id"`
}

type Transaction struct {
	RequestID string `db:"request_id,primary"`
	PaymentID string `db:"payment_id,primary"`
	Name      string `db:"name"`
}

payment := &Payment{
	ID: "some-uuid",
	Transactions: []Transaction{},
}

err := repo.Delete(ctx, payment, true)

I believe the problem is here: https://github.com/go-rel/rel/blob/master/repository.go#L879; when the associations slice is empty, filterCollection returns an empty rel.Eq.

[Subquery?] How to query user in any of groups.

table User {
  id int
  name varchar(32)
}

table GroupUser {
  user_id int
  group_id int
}

table Group {
  id int
  name varchar(32)
}

How do I find unique users in groups [1,2,3]?
The following code does not work:

groups := []int{1,2,3}
subQuery := rel.From("group_users").Select("DISTINCT user_id").Where(where.InInt("group_id", groups))
repo.FindAll(ctx, &users, where.In("id",  subQuery))
