seaql / sea-query
A dynamic SQL query builder for MySQL, Postgres and SQLite
Home Page: https://www.sea-ql.org
License: Other
I have a chrono::NaiveDateTime value, which would translate to a TIMESTAMP in postgres as shown here: https://docs.rs/postgres/0.19.0/postgres/types/trait.ToSql.html#types
I'm having trouble understanding how I would convert it to a sea_query::Value. Alternatively I could switch to SystemTime, but that's not really gonna help me here.
I'm currently trying to insert into a table, and I'm hitting this panic:
Line 164 in 7a2331a
To me it sounds like at that line, the Object should be converted to a Value::Bytes using serde_json::to_vec. I am actually not sure how that would interact with postgres' jsonb type; I think you have to create jsonb values like '{"field":"value"}'::jsonb. Which makes this a complicated problem, since this should only happen on json_binary() columns, while normal json columns should use ::json instead...
Expr::cust("3 + 2 * 2")
To prevent SQL injection, we'd also need a custom expression with parameter bindings:
Expr::cust_with_values("? + ? * ?", vec![3, 2, 2])
Hello!
Most of the time you want to express a column name relative to the parent table. You can do that with Expr::tbl, but it creates a lot of unnecessary boilerplate, since we already know the table in the enum.
My suggestion would be to add an option to Iden that adds the name of the table automatically, so you can use Expr::col and not worry about it. Since this would be totally optional, we could decide not to use it if you don't want to.
I can implement it if this is approved by the maintainer :)
Thanks
Currently there is no way to read sea-query structs directly. Providing readonly access to these structs will be necessary to support external crates such as sea-orm.
However, this readonly access exposes internal data structures to external crates. And since internal data structures can change over time, external crates depending on this readonly access have to understand the risk.
#[derive(Debug, Clone, ReadOnly)]
pub struct TableCreateStatement {
pub(crate) table: Option<Rc<dyn Iden>>,
pub(crate) columns: Vec<ColumnDef>,
pub(crate) options: Vec<TableOpt>,
pub(crate) partitions: Vec<TablePartition>,
pub(crate) indexes: Vec<IndexCreateStatement>,
pub(crate) foreign_keys: Vec<ForeignKeyCreateStatement>,
pub(crate) if_not_exists: bool,
}
Will generate...
#[derive(Debug, Clone)]
pub struct ReadOnlyTableCreateStatement {
pub table: Option<Rc<dyn Iden>>,
pub columns: Vec<ColumnDef>,
pub options: Vec<TableOpt>,
pub partitions: Vec<TablePartition>,
pub indexes: Vec<IndexCreateStatement>,
pub foreign_keys: Vec<ForeignKeyCreateStatement>,
pub if_not_exists: bool,
}
Ref: https://internals.rust-lang.org/t/pre-rfc-read-only-visibility/11280
Given the following function in a tokio runtime:
async fn update(
db: &DatabaseConnection,
active_model: model::ActiveModel,
) -> Result<bool, Box<dyn std::error::Error>> {
Entity::update(active_model)
.exec(db)
.await
.map_err(Box::new)?;
Ok(true)
}
An error is thrown due to SeaRc not implementing Send.
error: future cannot be sent between threads safely
|
76 | ) -> Result<bool, Box<dyn std::error::Error>> {
| ___________________________________________________^
77 | | use models::coupons::Entity as Coupon;
78 | |
79 | | let res = Coupon::update(coupon_update)
... |
84 | | Ok(true)
85 | | }
| |_____^ future created by async block is not `Send`
|
= help: within `TableRef`, the trait `std::marker::Send` is not implemented for `SeaRc<(dyn Iden + 'static)>`
pub use std::rc::Rc as SeaRc;
should be updated to use Arc instead of Rc, either permanently or through a feature flag.
Would it be a good idea to add a function join_cond() which accepts a Condition instead of a SimpleExpr?
Currently sea-query has the function join defined as below:
pub fn join<R>(
&mut self,
join: JoinType,
tbl_ref: R,
condition: SimpleExpr
) -> &mut Self
which means it can construct queries such as
SELECT a.id, b.id FROM a
LEFT JOIN b ON a.time=b.time
WHERE a.cost > 0;
but if we want to construct a query such as
SELECT a.id, b.id FROM a
LEFT JOIN b ON ( a.time=b.time AND a.cost=b.cost )
WHERE a.cost > 0;
it seems SimpleExpr cannot express a condition like this (except by using Expr::cust()).
Support Decimal from rust_decimal in Value.
Reference: #24
Currently, the way to write conditions is too close to SQL, yet different. To express WHERE a AND b, there are two possibilities: .and_where(a).and_where(b) or .or_where(a).and_where(b).
When we add more operations, things get messy: .and_where(a).or_where(b).and_where(c) actually resolves differently depending on whether it's on a Select or a Delete/Update:
WHERE a OR (b AND c), due to the higher precedence of AND vs OR, or
WHERE (a OR b) AND c, because of enforced left-associativity.
In both cases, the semantics are not clear. Instead of AND and OR, the syntax should be based on any and all, and then resolved by the builder with the correct semantics. Something along these lines:
Query::delete()
.from(Char::Table)
.where(
condition::any()
.add(a)
.add(
condition::all()
.add(b)
.add(c)))
This would resolve to:
DELETE FROM char WHERE a OR (b AND c);
We can also add the macros any! and all! to get:
.where(any![a, all![b, c]])
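To make the proposed semantics concrete, here is a self-contained sketch; Cond, render, and the macro definitions are illustrative stand-ins, not sea-query API:

```rust
// Illustrative condition tree: `any` renders as OR, `all` as AND.
#[derive(Debug)]
enum Cond {
    Leaf(&'static str),
    Any(Vec<Cond>),
    All(Vec<Cond>),
}

macro_rules! any { ($($c:expr),* $(,)?) => { Cond::Any(vec![$($c),*]) } }
macro_rules! all { ($($c:expr),* $(,)?) => { Cond::All(vec![$($c),*]) } }

// Render the tree with explicit parentheses, so nesting, not method call
// order, determines precedence.
fn render(c: &Cond) -> String {
    match c {
        Cond::Leaf(s) => s.to_string(),
        Cond::Any(v) => format!("({})", v.iter().map(render).collect::<Vec<_>>().join(" OR ")),
        Cond::All(v) => format!("({})", v.iter().map(render).collect::<Vec<_>>().join(" AND ")),
    }
}
```

With this, any![a, all![b, c]] renders as (a OR (b AND c)), matching the DELETE example above.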
Regarding implementation, the existing and_where can be marked deprecated and implemented as an Either with the any/all implementation, with a panic if both are used. Then, with a major version bump, we can remove and_where.
WDYT?
It would be nice to have the expected mappings from database types to Value types, similar to sqlx. It's been challenging figuring out which Value variant I need to construct when setting a default value for a column in a table schema based on a dynamic column type, especially since there isn't a one-to-one mapping of ColumnType to Value.
So as to remove sqlx as a dev-dependency.
Review all panics in sea-query, which we should avoid panic if possible, and document accordingly.
The following example:
#[derive(Debug, sea_query::Iden)]
pub enum Character {
Table,
Id,
Character,
FontSize,
SizeW,
SizeH,
FontId,
}
fn main() {
use Character::{Character as Char, FontId, FontSize, Id, SizeH, SizeW};
let stmt = sea_query::Query::insert()
.into_table(Character::Table)
.columns(vec![Id, Char, FontSize, SizeW, SizeH, FontId])
.to_string(sea_query::PostgresQueryBuilder);
println!("{}", stmt);
}
currently prints:
INSERT INTO "character" ("id", "character", "font_size", "size_w", "size_h", "font_id") VALUES
To make the usage of sea-query more straightforward while it's not yet possible to use all types with InsertStatement::values()
or InsertStatement::json()
(see #20, #21), it would be great if it would instead yield this:
INSERT INTO "character" ("id", "character", "font_size", "size_w", "size_h", "font_id") VALUES ($1, $2, $3, $4, $5, $6)
when used with PostgresQueryBuilder (for use with the postgres
or tokio-postgres
crate), or respectively this:
INSERT INTO "character" ("id", "character", "font_size", "size_w", "size_h", "font_id") VALUES (?, ?, ?, ?, ?, ?)
on mysql for use with the mysql
crate.
This could either happen automatically whenever the statement's values are empty, or explicitly using something like a InsertStatement::values_prepared()
.
This would allow easily using the resulting string with Client::prepare
in postgres, or respectively Queryable::prep
in mysql, so without much effort you'd gain the ability to use all of the native types these crates can accept, as well as the ability to integrate with their way of sanitizing inputs.
If there's interest, I could contribute something like this and open a PR, but I'd need to know which way of implementing this would be the best (implicit vs explicit).
It seems that there is no mention of postgres arrays at all. They are documented at https://www.postgresql.org/docs/9.1/arrays.html
My current work-around is something like:
pub struct PostgresArrayOf(sea_query::ColumnType);

impl sea_query::Iden for PostgresArrayOf {
    fn unquoted(&self, s: &mut dyn std::fmt::Write) {
        // Render the inner column type with the Postgres builder, then append "[]".
        let mut sqlw = sea_query::SqlWriter::new();
        sea_query::PostgresQueryBuilder.prepare_column_type(&self.0, &mut sqlw);
        write!(s, "{}[]", sqlw.result()).unwrap();
    }
}
// snipโฆ
ColumnDef::new(RunResult::Parameters)
.not_null()
.custom(PostgresArrayOf(sea_query::ColumnType::Double(None)))
Hi!
I could not find an easy way to specify all columns (but not star).
Maybe Iden could implement an all()
function that returns a vec?
It would look like:
Query::select().columns(MyModel::all())
Thanks!
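A hand-written version of what such a derive could generate; MyModel and the simplified Iden trait here are illustrative stand-ins to keep the sketch self-contained, not the real sea-query trait:

```rust
// Simplified stand-in for sea_query::Iden.
trait Iden {
    fn unquoted(&self) -> &'static str;
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum MyModel {
    Id,
    Name,
    CreatedAt,
}

impl Iden for MyModel {
    fn unquoted(&self) -> &'static str {
        match self {
            MyModel::Id => "id",
            MyModel::Name => "name",
            MyModel::CreatedAt => "created_at",
        }
    }
}

impl MyModel {
    // The derive macro could generate this: every column variant.
    fn all() -> Vec<MyModel> {
        vec![MyModel::Id, MyModel::Name, MyModel::CreatedAt]
    }
}
```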
I've noticed that the .json()
datatype on ColumnDef results in the json
datatype on the postgres side. Postgres has two json datatypes; json
(which gets validated, but then stored in plain text) and jsonb
(which gets stored in a decomposed binary format, so it is indexable and you can check for object containment and existence).
Looking at the mysql docs, it seems like the json datatype on the mysql side is the equivalent of the jsonb type (and they don't have an equivalent to postgres json; you're probably supposed to just use text).
Would it be an option to a) change the default ColumnType::Json type on the postgres side to yield jsonb, or b) change the ColumnType::Json type on the mysql side to yield text, and add a json_binary() type that results in json on mysql and jsonb on postgres?
Sometimes the '?' character is needed to write some operators in postgres like in JSONB @? JSONPATH
.
I tried the following
Expr::cust_with_values("data @? (?::JSONPATH)", vec![filter.as_str()])
I'd like to have a way to escape the ? in the operator, to prevent sea_query from assuming that it is an argument. This could be done by using ?? instead of a single ?, like in github.com/Masterminds/squirrel (that's Go, but the design question is the same, I guess).
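A minimal sketch of such an escaping pass; substitute is a hypothetical helper, and sea-query's actual parameter substitution works differently:

```rust
// Replace each bare `?` with the next parameter; `??` escapes a literal `?`.
fn substitute(sql: &str, params: &[&str]) -> String {
    let mut out = String::new();
    let mut chars = sql.chars().peekable();
    let mut idx = 0;
    while let Some(c) = chars.next() {
        if c == '?' {
            if chars.peek() == Some(&'?') {
                chars.next(); // consume the second '?', emit a literal one
                out.push('?');
            } else {
                out.push_str(params[idx]);
                idx += 1;
            }
        } else {
            out.push(c);
        }
    }
    out
}
```

So "data @?? ?" keeps the @? operator intact while still binding the trailing parameter.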
Postgres enums are basically strings that can only have a fixed set of values; on insert they check if the value is in that set.
These are basically custom types (created with a CREATE TYPE
statement).
Currently the API of sea-query doesn't really support them; there is no way to create a CREATE TYPE statement, so I would have to manually write that, and there's also no way to reference a custom type in a ColumnDef, so I would also have to write my CREATE TABLE and ALTER TABLE statements manually whenever an enum is involved.
Are there plans to support this functionality?
In the short term, it would probably be enough to provide a .custom<T: Iden>(typ: T)
or .custom(typ: &str)
method on ColumnDef, that would allow me to still use the query builder for non-CREATE TYPE statements and would also be database-independent.
In the future, full support for CREATE TYPE/ALTER TYPE/DROP TYPE statements would also be nice, maybe behind a postgres feature.
In sqlite and postgres, it is possible to use the ON CONFLICT clause for INSERT statements in order to do an upsert.
As an example suppose you have the following table:
CREATE TABLE customers (
customer_id serial PRIMARY KEY,
name VARCHAR UNIQUE,
email VARCHAR NOT NULL,
active bool NOT NULL DEFAULT TRUE
);
and you wish to insert without having errors and it is safe to update if it already exists in the table, this can all be handled in sql in the following way:
INSERT INTO customers (name, email)
VALUES('Microsoft','[email protected]')
ON CONFLICT (name)
DO
UPDATE SET email = EXCLUDED.email;
Note: all examples are taken directly from the upsert link (postgresqltutorial.com).
This is meant as a suggestion of what a nice API for this could look like; it is by no means what might end up being implemented, and it might not be possible to implement as written below.
The clause in both cases has two parameters: a target column, on which a constraint is generally applied, and an action.
use sea_query::{Query, ConflictAction};
// upsert example
Query::insert()
.into_table(Customers::Table)
.columns([Customers::Name, Customers::Email])
.on_conflict(Customers::Name, ConflictAction::Update(Query::update().set(Customers::Email)))
// ignore example
Query::insert()
.into_table(Customers::Table)
.columns([Customers::Name, Customers::Email])
.on_conflict(Customers::Name, ConflictAction::Nothing)
An alternative to supporting the ON CONFLICT clause would be to implement the UPSERT statement directly, which is supported in some databases. But seeing as ON CONFLICT is more powerful and seems to be the best practice in PostgreSQL anyway, I'm not sure this would be a viable alternative.
Seeing as ON CONFLICT is not standard SQL, this can lead to issues, namely that MySQL handles upserts differently: ON CONFLICT(target) DO UPDATE would actually be ON DUPLICATE KEY UPDATE, and it doesn't seem like you can add a target. That was the easy one; others, such as DO NOTHING, change the statement entirely to INSERT IGNORE ...
I'm not sure how this can be implemented cleanly.
Hey, this looks like a really cool project! The readme mentions that this is the foundation for upcoming ORM - is it going to be a high-level SQL ORM like diesel, or are you developing something else?
When creating a prepared statement in something like sqlx or tokio-postgres, I'm having trouble creating a prepared update statement which supports updating fields to be null.
This would optimally be done with an Option<Option<T>> type (in my case I'd actually use it with MaybeUndefined from the async-graphql crate), where:
None would not update the field at all
Some(None) would update the field to be null
Some(Some(value)) would update the field with the value
The main issue is that the statement would include fields only where applicable.
Eg:
UPDATE foos SET a = $1, b = $2, c = $3 WHERE id = $4;
Or if a is None, b is Some(None) and c is Some(Some(value)):
UPDATE foos SET b = null, c = $1 WHERE id = $2;
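A sketch of how the tri-state could drive SET-clause generation; set_clause and its signature are made up for illustration, not an existing sea-query API:

```rust
// Build one SET fragment from a tri-state field value; returns None to skip
// the field entirely.
fn set_clause(field: &str, value: &Option<Option<i64>>, params: &mut Vec<i64>) -> Option<String> {
    match value {
        None => None, // field untouched
        Some(None) => Some(format!("{} = NULL", field)),
        Some(Some(v)) => {
            params.push(*v);
            Some(format!("{} = ${}", field, params.len()))
        }
    }
}
```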
I looked through the sea-query docs but could not see anything which would provide this kind of functionality.
Is this supported? Or planned to be supported?
Column defaults in CREATE TABLE statements should, in addition to value literals, support expressions. For example, it's very common for a column of type uuid to default to either gen_random_uuid() or uuid_generate_v4(). Ideally, ColumnSpec::Default(Value) would contain a SimpleExpr instead of a Value, since SimpleExpr has a Value variant. This would obviously be a breaking change, so perhaps instead consider adding ColumnSpec::DefaultExpr(SimpleExpr).
TableCreateStatement's builder methods take a mutable reference to self, whereas most others (ForeignKeyCreateStatement, IndexCreateStatement, ColumnDef, etc.) appear to take full ownership of self. My guess is that in the typical use case you already know the structure of what's being built ahead of time, such that you can have a single fluent chain of calls to builder methods. This turns out to be very unergonomic (compared to taking a mutable reference) when dynamically building up these statements, as you end up with a lot of this:
column_def = if column_schema.primary {
column_def.primary_key()
} else {
column_def
};
instead of simply
if column_schema.primary {
column_def.primary_key()
}
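A toy builder (not sea-query's actual types) showing why the &mut self style composes better with runtime branching:

```rust
// Toy column builder using the `&mut Self` style.
#[derive(Default, Debug)]
struct ColDef {
    name: String,
    primary: bool,
    not_null: bool,
}

impl ColDef {
    fn new(name: &str) -> Self {
        Self { name: name.to_string(), ..Default::default() }
    }
    fn primary_key(&mut self) -> &mut Self {
        self.primary = true;
        self
    }
    fn not_null(&mut self) -> &mut Self {
        self.not_null = true;
        self
    }
}

// With `&mut self` builders, a conditional step is a plain `if`,
// with no rebinding of the builder value.
fn build_col(name: &str, primary: bool) -> ColDef {
    let mut col = ColDef::new(name);
    col.not_null();
    if primary {
        col.primary_key();
    }
    col
}
```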
Not a huge issue, but I figured I'd raise it anyway as an opportunity to align these APIs or otherwise find out why the current design actually makes more sense.
I've been toying with the idea of making a similar Query Builder for Sqlite. But adding support to this library seems like a much better idea.
In the readme it says that sqlite support is a work in progress. What still needs to be done on that front?
In order to be more idiomatic, the use of generics would be preferable to asking for a Vec as a parameter.
Take for example SelectStatement::columns which has a signature as follows:
pub fn columns<T>(&mut self, cols: Vec<T>) -> &mut Self
where T: IntoColumnRef
This could become
pub fn columns<T, I>(&mut self, cols: I) -> &mut Self
where
T: IntoColumnRef,
I: IntoIterator<Item = T>
This stems from the discussion in #15.
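A self-contained illustration of why the IntoIterator bound is more flexible: the same function accepts a Vec, an array, or an iterator adapter (this stand-in simply collects column names into Strings, rather than building a real statement):

```rust
// Generic over anything iterable, instead of requiring a Vec.
fn columns<T, I>(cols: I) -> Vec<String>
where
    T: Into<String>,
    I: IntoIterator<Item = T>,
{
    cols.into_iter().map(Into::into).collect()
}
```

Callers can then pass vec!["a", "b"], ["a", "b"], or an iterator chain without an intermediate collect.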
A similar task to cond_where
Ref: c694295
Using sea_query version 0.8 and tokio_postgres version 0.7.
let (statement, params) = sea_query::Query::select()
.columns(vec![
iden::Model::Id,
])
.from(iden::Model::Table)
.and_where(
sea_query::Expr::tbl(
iden::Model::Table,
iden::Model::DeletedAt,
)
.is_null(),
)
.build(sea_query::PostgresQueryBuilder);
println!("{} -- {:?}", statement, params);
let statement = conn.prepare(&*statement).await?;
let rows = conn.query(&statement, &*params.as_params()).await?;
Output:
SELECT "id" FROM "models" WHERE "models"."deleted_at" IS $1 -- Values([Null])
Error: db error: ERROR: syntax error at or near "$1"
Expected output:
SELECT "id" FROM "models" WHERE "models"."deleted_at" IS NULL -- Values([])
Both Postgres and MySQL have a first-class uuid
type by now. MariaDB doesn't, so that's yet another issue of that nature.
Once #5 is merged, I can just create a UuidType
unit struct and implement Iden on it to work around this, but it would be nice to have API support for that on ColumnDef as well, at least at some point in the future.
Right now we have sqlx-mysql, sqlx-postgres and rusqlite driver support, but no sqlx-sqlite.
It would be great to have 1) a driver and 2) a usage example for sqlx-sqlite.
let query = Query::update()
.into_table(Glyph::Table)
.values(vec![
(Glyph::Aspect, 2.1345.into()),
(Glyph::Image, "235m".into()),
])
.and_where(Expr::col(Glyph::Id).eq(1))
.to_owned();
assert_eq!(
query.to_string(MysqlQueryBuilder),
r#"UPDATE `glyph` SET `aspect` = 2.1345, `image` = '235m' WHERE `id` = 1"#
);
@billy1624
Why did we name the method into_table? Shall we name it simply table?
Also, I suggest we add a singular value() for convenience.
Hello,
I have been testing this crate for the past several days and after some searching in the documentation and in the code I cannot find any references to range support. I have only used Sea Query with a postgres database, together with the postgres crate, which does support ranges. Additionally I used this crate.
Are there any plans to support range types? Could you point me to where would it be the best way to start supporting them in the code?
Thank you.
Currently, there's no way to specify table constraints in TableCreateStatement
s. Adding an .extra()
method similar to ColumnDef
would probably be the easiest way to add support for this.
Basically the equivalent of #[serde(flatten)], but for the Iden derive.
It allows us to be DRYer by using composition for common columns across many tables.
For example, for keeping track of which user created an entry and when:
#[derive(Iden)]
enum Creation {
CreatedBy,
CreationDate,
}
#[derive(Iden)]
enum Product {
Name,
Price,
#[iden(flatten)]
CreationInfo(Creation),
}
#[derive(Iden)]
enum Sale {
ProductId,
Discount,
EndDate,
#[iden(flatten)]
CreationInfo(Creation)
}
Currently I'm manually writing the following implementation, which could easily be generated by the derive expansion:
#[automatically_derived]
impl Iden for Product {
fn unquoted(&self, s: &mut dyn std::fmt::Write) {
write!(
s,
"{}",
match self {
Self::Name => "name",
Self::Price => "price",
Self::CreationInfo(c) => return c.unquoted(s),
}
)
.unwrap()
}
}
In my schema I have a postgres array of uuid's:
CREATE TABLE public.transaction_entry_types
(
id uuid NOT NULL,
data_type character varying(100) NOT NULL,
name character varying NOT NULL,
subtype_ids uuid[] NOT NULL,
comment_required boolean,
description text,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
PRIMARY KEY (id)
);
When I try inserting the record with:
let (sql, values) = Query::insert()
.into_table(TransactionEntryTypes::Table)
.columns(vec![
TransactionEntryTypes::Id,
TransactionEntryTypes::DataType,
TransactionEntryTypes::Name,
TransactionEntryTypes::SubtypeIds,
TransactionEntryTypes::CommentRequired,
TransactionEntryTypes::Description,
TransactionEntryTypes::CreatedAt,
TransactionEntryTypes::UpdatedAt
])
.values(vec![
identifier.into(),
entry_type.data_type.into(),
entry_type.name.into(),
entry_type.subtype_ids.into(),
entry_type.comment_required.into(),
entry_type.description.into(),
chrono::offset::Utc::now().into(),
chrono::offset::Utc::now().into(),
])
.unwrap()
.returning_col(TransactionEntryTypes::Id)
.build(PostgresQueryBuilder);
Where entry_type.subtype_ids is a Vec<Uuid>, I get the following error:
entry_type.subtype_ids.into(),
^^^^ the trait `From<Vec<uuid::Uuid>>` is not implemented for `sea_query::Value`
This is my Cargo.toml entry:
sea-query = { version = "^0", features = ["postgres", "postgres-uuid", "postgres-chrono", "postgres-json"] }
Any suggestions on how to implement the From trait for sea_query::Value for this type?
Thanks
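Until a From<Vec<Uuid>> impl exists upstream, one possible stopgap is to render the vector as a Postgres array literal string and bind it as text, adding an explicit ::uuid[] cast in the SQL. A minimal sketch of the literal rendering (plain &str stands in for uuid::Uuid to keep this self-contained; this is an assumption, not an endorsed sea-query pattern):

```rust
// Render e.g. ["a", "b"] as the Postgres array literal {a,b}.
fn pg_array_literal(items: &[&str]) -> String {
    format!("{{{}}}", items.join(","))
}
```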
Hello!
We currently use diesel for most of our queries, but we are having trouble with some advanced ones where we need conditional joins. We are thinking of using sea-query for those, but we still want to use diesel for the rest for now. I am going to build the driver for diesel 2 (because diesel 1.x doesn't have boxed raw queries), and I was also thinking of building a macro to generate the Iden enums from the schema.
Do you think it would be worth creating a PR to integrate that into the main repo?
Thanks
Ref:
Just wanna know why From<usize> is not implemented for sea_query::Value.
Since version 0.12.0, a query with both and_where and and_where_option specified is no longer valid. The compilation fails with Cannot mix `and_where`/`or_where` and `cond_where` in statements.
The Condition struct, accepted by cond_where, doesn't have a method to replace the and_where_option functionality.
I believe the "datetime" column type's equivalent is "timestamp without time zone" in Postgres.
Ref: https://www.postgresql.org/docs/13/datatype-datetime.html
Code changes needed here
sea-query/src/backend/postgres/table.rs
Lines 60 to 63 in 744a8a8
examples/postgres_json
Hello,
Do you plan on supporting CockroachDB?
Currently, running the Postgres example results in the following error:
Create table character: ()
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: None, code: SqlState(E42601), message: "error in argument for $2: int8 requires 8 bytes for binary format", detail: None, hint: None, position: None, where_: None, schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("encoding.go"), line: Some(520), routine: Some("DecodeDatum") }) }', examples/postgres/src/main.rs:48:67
stack backtrace:
0: rust_begin_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:493:5
1: core::panicking::panic_fmt
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:92:14
2: core::result::unwrap_failed
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/result.rs:1355:5
3: core::result::Result<T,E>::unwrap
at /Users/remikalbe/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/library/core/src/result.rs:1037:23
4: sea_query_postgres_example::main
at ./src/main.rs:48:15
5: core::ops::function::FnOnce::call_once
at /Users/remikalbe/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Currently there is only a Value::DateTime variant, which does not work for TIMESTAMP WITH TIME ZONE (TIMESTAMPTZ) column types.
Can we add a trait for all query types to factor out the interface/implementation of build and to_string? That way, if we want a function that takes any query and a builder and builds the query, we can just be generic over the trait instead of each specific query type.
WDYT?
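A self-contained sketch of what such a trait could look like; the names here are made up, and a real version would presumably also be generic over the backend builder:

```rust
// Common interface shared by all statement types.
trait Statement {
    fn to_sql(&self) -> String;
}

struct SelectStmt { table: String }
struct DeleteStmt { table: String }

impl Statement for SelectStmt {
    fn to_sql(&self) -> String { format!("SELECT * FROM {}", self.table) }
}
impl Statement for DeleteStmt {
    fn to_sql(&self) -> String { format!("DELETE FROM {}", self.table) }
}

// Generic over the trait: one function builds any statement kind.
fn build_any<S: Statement>(stmt: &S) -> String {
    stmt.to_sql()
}
```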
Hi,
I'm using the sqlite query builder here with chrono::NaiveDateTime, and it seems that it breaks on inserts because the timestamp is not quoted. It sounds like an easy enough fix :) To reproduce:
#[derive(Iden)]
pub enum Users {
Table,
CreationDate,
}
let sql_pool = sqlx::sqlite::SqlitePoolOptions::new().connect("sqlite::memory:").await.unwrap();
sqlx::query(
&Table::create()
.table(Users::Table)
.create_if_not_exists()
.col(ColumnDef::new(Users::CreationDate).date_time().not_null())
.to_string(SqliteQueryBuilder),
)
.execute(pool)
.await?;
// This works:
sqlx::query(r#"INSERT INTO users (creation_date) VALUES ("1970-01-01 00:00:00")"#).execute(&sql_pool).await.unwrap();
// This doesn't:
let query = Query::insert()
.into_table(Users::Table)
.columns(vec![Users::CreationDate])
.values_panic(vec![chrono::NaiveDateTime::from_timestamp(0, 0).into()])
.to_string(SqliteQueryBuilder);
sqlx::query(&query).execute(&sql_pool).await.unwrap();
The mapping for ColumnType::Binary in PostgresQueryBuilder.prepare_column_type() produces a binary type. This doesn't appear to match any documented type in postgres (see Table 8.1 Data Types). It's unclear if this is a mistake or just an undocumented type alias (pg mentions there are undocumented type aliases). Could this be updated to use a documented type? Since this takes an optional length, maybe this is meant to be bit? Possibly bytea, but that doesn't take a length.
Support the following:
LEFT JOIN table AS alias
This is already possible for fields with expr_alias
and for the FROM clause via from_alias
, but not for joins (as far as I can tell, would love to be proven wrong). This is important as soon as you want to JOIN the same table twice.
Note that this might require an Expr::alias(alias, column)
as well, but not necessarily since you can just use Expr::tbl
with an Alias
everywhere.
Currently the GenericBuilder trait returns Box<dyn QueryBuilder> for query_builder(&self), and so on for the other functions. We could use an associated type (which would be Self in this case) and then return the object without boxing it.
Something like this:
Something like this:
pub trait GenericBuilder {
type Builder: QueryBuilder + TableBuilder + IndexBuilder + ForeignKeyBuilder;
fn query_builder(&self) -> Self::Builder;
fn table_builder(&self) -> Self::Builder;
fn index_builder(&self) -> Self::Builder;
fn foreign_key_builder(&self) -> Self::Builder;
}
Follow up on #2, where CREATE TYPE and DROP TYPE for ENUMs were already implemented in
https://github.com/SeaQL/sea-query/blob/master/src/extension/postgres/types.rs
We now want to implement ALTER TYPE ADD VALUE and ALTER TYPE RENAME VALUE etc. to complete the use case, and prepare test cases for them.
Hi, I'm a heavy TypeORM user, so it's great to see something similar in Rust! I have a quick question: in Query::select(), will you support dynamic .column() and .from() in the future?
e.g.:
Query::select()
// Instead of using an `enum`, is it possible to use a `String`?
.column(Char::Character)
.from(Char::Table)
.conditions(
...
);
Looking at how you have to manually implement Iden at the moment, it seems like a natural fit for a minimal derive macro.
Something like this:
#[derive(Iden)]
#[iden(rename_all = "snake_case")]
pub enum Character {
#[iden("character")]
Table,
Id,
Character,
FontSize,
SizeW,
SizeH,
FontId,
}
Going one step further, you could theoretically derive it on a struct as well, auto-generating the enum:
#[derive(IdenEnum, PostgresMapper)]
#[iden(table = "character")]
pub struct Character {
id: i64,
character: String,
font_size: i64,
size_w: i64,
size_h: i64,
font_id: Option<i64>,
}
// This would generate the enum above with an Iden implementation based on the struct's fields.
// As for the name of the enum, that could be chosen by another iden attribute, and default to
// something like {struct name}Def, ie CharacterDef.
// The implementation would transform the field names in the enum to PascalCase by default.
// And then you can directly use the struct with tokio-pg-mapper (Character::from_row)
This is mostly playing with ideas, but I'd be interested in contributing something like that if there is interest. Possibly something like that is already planned for SeaORM, but it would be nice to have it customizable and independent of the bigger ORM project.