
granite's Issues

Support Amber Environment Variables

Currently Granite supports configuring the database_url using 2 methods:

  • An environment variable DATABASE_URL
  • A config/database.yml file

Add a third option, with priority between these two, that looks for an environment setting in Amber.
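The resolution order would then look roughly like this (a sketch; amber_environment_database_url and database_yml_url are hypothetical helpers standing in for the Amber settings lookup and the existing YAML reader):

def resolve_database_url : String?
  ENV["DATABASE_URL"]? ||             # 1. environment variable (highest priority)
    amber_environment_database_url || # 2. proposed: Amber environment setting
    database_yml_url                  # 3. config/database.yml (lowest priority)
end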

Allow logging

Currently Amber shows request and some response information in its logs.

 | DELETE  "/languages/1"  | 14.55ms
Params: _method=delete&id=1
200  | GET  "/languages"  | 1.96ms
Params: 
 | DELETE  "/languages/2"  | 27.49ms
Params: _method=delete&id=2
200  | GET  "/languages"  | 1.66ms
Params: 
 | DELETE  "/languages/3"  | 24.58ms
Params: _method=delete&id=3
200  | GET  "/languages"  | 2.28ms
Params: 
 | DELETE  "/languages/4"  | 24.58ms
Params: _method=delete&id=4
200  | GET  "/languages"  | 1.68ms
Params: 
 | DELETE  "/languages/5"  | 27.02ms
Params: _method=delete&id=5
 | DELETE  "/languages/5"  | 397.0µs
Params: _method=delete&id=5
200  | GET  "/languages"  | 1.45ms

It would be useful to show query data in the logs too, just as Rails and Phoenix do:

Article Create (0.000443)   INSERT INTO "articles" ("updated_at", "title", "body", "published",
 "created_at") VALUES('2008-09-08 14:52:54', 'Debugging Rails',
 'I''m learning how to print in logs!!!', 'f', '2008-09-08 14:52:54')

@drujensen WDYT?

amber db create for TiDB

Hi, I am new to Amber. I am trying to create a migration to use Amber with TiDB. TiDB uses the same protocol as MySQL; this is the official statement from TiDB:

https://github.com/pingcap/tidb/blob/master/docs/QUICKSTART.md

But when I try to connect Amber to TiDB using the MySQL configuration from the docs (just a different port), it seems it cannot prepare the statement for creating the DB.
Maybe there is an issue with the MySQL adapter?

can't name a column created_at in SQLite

This is related to #29 but can be handled independently.

As long as we don't have support for timestamps in SQLite, Granite shouldn't force created_at to be a Time.

On a related note, if I don't tell Granite to use timestamps, it seems silly that it should try to enforce the format of timestamp columns. Yes, it's a bit silly to have "bob" in my created_at column, but as I didn't tell Granite to do timestamps, it shouldn't be enforcing its opinions on what I can have in that column.

Yes, Rails goes ahead and does it, but there's a difference: in Rails I don't have to specify that I want timestamps, which the docs here seem to suggest I DO need to do, although this bug seems to prove the docs wrong... either that or something is whacky.

In the meantime, I'll be attempting to monkeypatch the created_at functionality out of existence. :/

Improving relationships between models

When using Granite ORM I found myself missing a lot of functionality in terms of relationships between models, and after starting to extend it in my own chamber I wanted to share some thoughts on improvements, my approach so far, and the possibility of contributing this to Granite. I'm still a beginner in Crystal, so I'm sorry if some of my code is written poorly!

Distinguishing between serial and primary

Distinguishing serial (an auto-incrementing field in the DB) from primary (a primary key that does not increment) opens up better possibilities for mapping many-to-many relationships through join tables, in the same manner as before, and could be added as:

macro serial(col)
  {% SERIAL[col.var] = col.type %}
end

Identifying the most important database relationships to cover

For Granite to mirror the different kinds of relationships found in databases, I identified the following cases and cover them one by one below:

  1. many
  2. many_through
  3. many_tuples_through
  4. one
  5. one_tuple

I've been working on a website for creating recipes with object references for ingredients, so I will use my recipe structure to exemplify the different cases.

1. many

This is what is called "has_many" in Granite today, a one-to-many relationship. Basically, a recipe has many steps, and each step has a primary key which links it to the specific recipe.

class Recipe < Model
  serial recipe_id : Int32
  field title : String
  include_timestamps
  has_many steps : Step
end

class Step < Model
  primary recipe_id : Int32
  primary step_order : Int32
  field description : String
  include_timestamps
end

The steps are lazily loaded within the Model, such as:

{% for name, type in MANY %}
  property {{name.id}} : Array({{type.id}}) { {{type.id}}.where({
    {% for name, type in SERIAL %} :{{name.id}} => @{{name.id}}, {% end %}
    {% for name, type in PRIMARY %} :{{name.id}} => @{{name.id}}, {% end %}
  }) }
{% end %}

2. many_through

This is the standard many-to-many case, where two tables are joined through a joining table. Apologies for the poor naming of my functions; here is a sketch of how the adapter part of the function would behave:

def many_through(parent, child, joiner)
  statement = String.build do |stmt|
    stmt << "SELECT "
    stmt << child.property_names.map { |key|
      "#{child.table}.#{key}"
    }.join(", ")
    stmt << " FROM "
    stmt << joiner.table
    stmt << " INNER JOIN " + child.table + " ON "
    stmt << child.keys.map { |key|
      "#{child.table}.#{key} = #{joiner.table}.#{key}"
    }.join(" ")
    stmt << " WHERE "
    stmt << parent.class.keys.map_with_index { |key, index| "#{joiner.table}.#{key} = $#{index + 1}" }.join(" AND ")
  end
  open do |db|
    db.query statement, parent.keys do |rs|
      yield rs
    end
  end
end

This allows the following syntax:

class Ingredient < Model
  serial ingredient_id : Int32
  field name : String
  include_timestamps
  has_many pesticides : Pesticide, {:through => IngredientPesticide}
end

In the same manner, these are lazily loaded with the Ingredient through the Model.

3. many_tuples_through

This covers the case of more complex joins through multiple tables, including the possibility of adding fields (on top of primary keys) to the joining table. For instance, a recipe consists of ingredients. In the database, we have tables for ingredients and units, which are pointed at by the recipe. The actual ingredient is a joining table consisting of:
recipe_id, ingredient_id, unit_id (the composite primary key) and amount
These are then mapped through a joining query, such as:

  def many_tuples_through(parent, children, joiner)
    statement = String.build do |stmt|
      stmt << "SELECT "
      stmt << children.map { |name, child| 
        if child.responds_to?(:property_names)
          child.property_names.map { |prop| "#{child.table}.#{prop}" }.join(", ")
        else
          "#{joiner.table}.#{name}"
        end 
      }.join(", ")
      stmt << " FROM "
      stmt << joiner.table 
      children.map { |name, child|
        if child.responds_to?(:table)
          stmt << " INNER JOIN " + child.table + " ON "
          stmt << child.keys.map { |prop| 
            "#{child.table}.#{prop} = #{joiner.table}.#{prop}"
          }.join(" ")
        end
      }.join(" ")
      stmt << " WHERE "
      stmt << parent.class.keys.map_with_index { |key, index| "#{joiner.table}.#{key} = $#{index + 1}" }.join(" AND ")
    end 
    open do |db|
      db.query statement, parent.keys do |rs|
        yield rs 
      end
    end
  end

This then generates the following SQL in our case:

SELECT recipe_ingredient_unit.amount, ingredients.ingredient_id, ingredients.name, ingredients.created_at, ingredients.updated_at, units.unit_id, units.name, units.short_name, units.unit_type, units.unit_amount, units.is_default, units.created_at, units.updated_at 
FROM recipe_ingredient_unit 
INNER JOIN ingredients ON ingredients.ingredient_id = recipe_ingredient_unit.ingredient_id 
INNER JOIN units ON units.unit_id = recipe_ingredient_unit.unit_id 
WHERE recipe_ingredient_unit.recipe_id = $1

This means we can have these 4 tables: recipes, ingredients, units and recipe_ingredient_unit. My model then allows for the following syntax:

class Recipe < Model
  serial recipe_id : Int32
  field title : String
  include_timestamps
  has_many steps : Step
  has_many_tuples ingredients : {amount: Int32, ingredient: Ingredient, unit: Unit}, {:through => RecipeIngredientUnit}
end

This declaration instantiates an array of named tuples consisting of the fields in the joining table (in this case amount) and the objects in the other tables, also lazily loaded.

4. one

Same as belongs_to today.

5. one_tuple

A combination of many_tuples_through and one, e.g.:

class Nutrition < Model
  has_one recommended_daily_dosage : {rdi: RDI, unit: Unit}
end

I thought this was way too big to be considered a pull request, so I added it as an issue, hoping to find someone interested in helping me improve it so it can become viable for Granite! :)
Also, thanks for the great work on Granite; I hope I can be of some help contributing!

Model to schema naming conventions

Coming from ActiveRecord, the model-name-to-table-name convention is great to see. I did notice one difference, though, and wanted to bring it up since it was the only divergence I saw between Granite's convention and ActiveRecord's. It can be seen in this error:

icr(0.23.1) > SongThread.all
relation "songthreads" does not exist (PQ::PQError)

The Rails Guides outline this convention for ActiveRecord:

Database Table - Plural with underscores separating words (e.g., book_clubs).
Model Class - Singular with the first letter of each word capitalized (e.g., BookClub).

In this case, you'd expect the schema relation to be called song_threads.

Here's my SongThread class, for completeness of the example:

require "granite_orm/adapter/pg"

class SongThread < Granite::ORM::Base
  adapter pg

  field bpm : Int32
  field title : String
  field primary_account : Int32
  field secondary_account : Int32

  timestamps

  def primary_account
    Account.find @primary_account
  end

  def secondary_account
    Account.find @secondary_account
  end
end
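Until the convention lines up, the mismatch can be worked around by setting the table name explicitly (a sketch using the table_name macro that appears elsewhere in this tracker):

class SongThread < Granite::ORM::Base
  adapter pg
  table_name song_threads

  # fields as above
end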

Numeric casts and PostgreSQL

crystal-pg returns an Int32 or Int64 depending on the integer size of the column (PostgreSQL has several integer widths). Crystal doesn't implicitly upcast integers, so Granite has to convert explicitly between the different integer types. Normalizing everything to Int64 probably makes the most sense.
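A minimal sketch of that normalization, assuming the value arrives from crystal-pg as either width:

# widen whatever integer width the driver returned so model fields
# can use a single Int64 type
def normalize_int(value : Int32 | Int64) : Int64
  value.to_i64
end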

Validator code doesn't seem to work (or isn't documented)

I'm not sure I understand how validators work. The readme is mute on the subject, so I set out to do some digging and quickly ran up against a hard wall.

On a fresh project: amber new test -p sqlite
with a scaffolded model: amber generate scaffold user name:string age:integer

I added code to the user model to get this:

class User < Granite::ORM::Base
  adapter sqlite
  table_name users

  validate :name, "Name cannot be blank" do
    false
  end

  field name : String
  field age : Int64
end

Compiling the code yields this error:

> crystal run src/test.cr 
Error in src/test.cr:1: while requiring "../config/*"

require "../config/*"
^

in config/application.cr:3: while requiring "../src/models/**"

require "../src/models/**"
^

in src/models/user.cr:7: expanding macro

  validate :name, "Name cannot be blank" do
  ^~~~~~~~

in src/models/user.cr:7: expanding macro

  validate :name, "Name cannot be blank" do
  ^

in macro 'validate' /Users/robert/Documents/repositories/test/lib/granite_orm/src/granite_orm/validators.cr:12, line 1:

>  1.     @@validators << {field: :base, message: :name, block: "Name cannot be blank"}
   2.   

no overload matches 'Array(NamedTuple(field: Symbol, message: String, block: Proc(User, Bool)))#<<' with type NamedTuple(field: Symbol, message: Symbol, block: String)
Overloads are:
 - Array(T)#<<(value : T)

Reading the error, the compiler is choosing the wrong macro, which I assume is because the arguments to the macros aren't typed:

  macro validate(message, block)
    @@validators << {field: :base, message: {{message}}, block: {{block}}}
  end

  macro validate(field, message, block)
    @@validators << {field: {{field}}, message: {{message}}, block: {{block}}}
  end

Digging in a little, I think perhaps this code was written for an older version of Crystal?
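For what it's worth, one way the overload ambiguity could be removed (a sketch only, not necessarily how the library resolved it) is to take the block as a real block argument, so a call written with do ... end can only match the block-taking overload:

macro validate(field, message, &block)
  # field: Symbol, message: String; the block body becomes the proc body
  @@validators << {field: {{field}}, message: {{message}},
                   block: ->(model : {{@type}}) { {{block.body}} }}
end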

Provide JOINS and INCLUDES querying methods and some thoughts

OPENING FOR DISCUSSION

After reading the QUERYING section of the README (https://github.com/amberframework/granite-orm#queries), it seems more natural and explicit to have includes and joins methods, since the current approach is implicit about how joins are used.

One of the drawbacks I see with the current implementation is that the JOIN has to live where the query is defined, making the query less reusable.

Examples:

# Simple Joins
Post.joins(:users).all("...")

# Custom Joins
Post.joins("JOIN users u ON u.id = posts.id").all("...")

# Nested Joins
Post.joins(users: :addresses).all("...")

# Scoped methods

def self.with_users
  joins(:users)
end

def self.with_users_addresses
  joins(users: :addresses)
end

# Usage
Post.with_users.all("where age > 20")
Post.with_users.all("where age > 40 AND age < 50")

Benefits

  • Easily express simple and nested joins.
  • Explicitly defined joins
  • Allow scoped methods
  • More reusable queries
  • Complement the association methods to build through associations.

Other Thoughts/Questions

What about adding a where to allow even more reusable queries for scopes? Or is this currently possible by chaining all?

def self.with_users
  joins(:users)
end

def self.above_twenty
  where("age > 20") # or where(age: [40..50])
end

Conclusion

I know we would like to keep the DSL simple, but we are going to need to implement more methods (where, limit, select, merge) to simplify things and make query methods more reusable.

Integer fields ?

Hi,

I'm trying to set up an INTEGER field in sql_mapping, but it looks buggy.

class Board < Kemalyst::Model
  adapter sqlite

  sql_mapping({
    user_id: ["INTEGER", Int32]
  })

end

b = Board.new
b.user_id = 42
b.save

When saving, I get:

Error in ./app/models/board.cr:14: instantiating 'Board#save()'

b.save
  ^~~~


in ./libs/kemalyst-model/kemalyst-model.cr:173: instantiating 'Kemalyst::Adapter::Sqlite#update(String, Hash(String, String), (Int32 | Int64), Hash(String, Int32 | Time | Nil))'

          db.update(@@table_name, self.class.fields, value, params)
             ^~~~~~

in ./libs/kemalyst-model/adapter/sqlite.cr:110: no overload matches 'Hash(String, Int32 | Time | Nil)#[]=' with types String, String
Overloads are:
 - Hash(K, V)#[]=(key : K, value : V)

      params["id"] = "#{id}"

I looked into the specs and was surprised that no integer field is covered :octocat:

Modularize Granite::ORM

I think it is time to modularize Granite::ORM because src/granite_orm.cr has become too complicated to add new features to.
The following are my suggestions.

Renaming Granite::ORM to Granite::ORM::Base

If we want to split Granite::ORM into modules, those modules should live inside a namespace, which would most naturally be named Granite::ORM. Thus, renaming the original class Granite::ORM to Granite::ORM::Base is necessary, although it is a breaking change.

The Plan for Modularizing

  • src
    • adapter - unchanged
    • granite_orm
      • associations.cr - things mentioned in this issue #30
      • base.cr - require all the other files in src/granite_orm, declare class Granite::ORM::Base, include (or extend) the modules, and define the constructors
      • callbacks.cr - things in my previous pr #42
      • fields.cr - macro field, timestamps, process_fields (but this is still too large and needs partitioning), and the set_attributes method
      • querying.cr - current public class methods
      • table.cr - macro adapter, table_name, primary
      • transactions.cr - method save, destroy
      • version.cr - just version
    • granite_orm.cr - require "./granite_orm/base"

There may be some difficulties and this plan still needs improvements, so let's start discussing.
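To make the layout concrete, base.cr could end up looking roughly like this (a sketch; the module names are only inferred from the file list above):

# src/granite_orm/base.cr (sketch)
require "./associations"
require "./callbacks"
require "./fields"
require "./querying"
require "./table"
require "./transactions"
require "./version"

module Granite
  module ORM
    class Base
      include Associations
      include Callbacks
      include Fields
      include Table
      include Transactions
      extend Querying
    end
  end
end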

Keep database connection open

The current implementation opens and closes the database connection on each request. This makes it reliable and self-healing if the network or database goes down for a short period of time, but it slows the web app down significantly.

  • Change the logic to keep the database connection open.
  • Add logic to handle failures and retry to recreate the connection.
  • Add specs to verify.

Add support for Boolean

The new db drivers do not support Boolean in the DB::Any union. Look at how to support other types not included in the driver.

Support id columns of type Int32 for legacy tables

It would be neat to be able to override the type of the id column when defining a model class.

Use case: I have an existing Rails app using MySQL as database backend, and want to implement a new microservice in Crystal operating on the same database. The id column in this database is of type INT(11), which maps to an Int32 with the mysql adapter.

This currently precludes the use of kemalyst-model as a mapping layer, due to its hardcoded Int64 type for id columns.
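For illustration, the override being asked for might look something like this (hypothetical syntax; today the id type is fixed to Int64):

class LegacyUser < Kemalyst::Model
  adapter mysql

  # proposed: declare the primary key type explicitly so id maps to INT(11)
  primary id : Int32

  sql_mapping({
    name: ["VARCHAR(255)", String]
  })
end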

Unclear how to do password hashing

I'm trying to create a user model where I can do something like User.new({ email: "[email protected]", password: "password" }) and have it transform the "password" input field into a "hash" field in the db containing the hashed password. It's unclear to me exactly how to achieve this with Granite.

It seems User.new(...) and user.set_attributes(...) only accept values explicitly declared with field, so I'd have to use the same hash name as in the database. I looked at using the before_save callback, but I'm not sure how to tell whether the value is new or already hashed.

It'd also be nice to have some way for fields like this to be accessible only from within the model; that way you'd have to write explicit methods to interact with them. The hash field, for example, should really only ever be readable in a login method.

Any ideas?
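Not an answer from the maintainers, but for illustration one direction could be a virtual setter that hashes on assignment (a sketch using Crystal's bundled Crypto::Bcrypt::Password; it assumes the field macro stores the column in @hash, and the plaintext would be assigned via user.password = ... rather than through the params hash):

require "crypto/bcrypt/password"

class User < Granite::ORM::Base
  adapter pg

  field email : String
  field hash : String

  # virtual attribute: the plaintext is never stored, only its bcrypt digest
  def password=(plaintext : String)
    @hash = Crypto::Bcrypt::Password.create(plaintext).to_s
  end

  # the only place the stored digest is read back
  def authenticate(plaintext : String) : Bool
    if stored = hash
      Crypto::Bcrypt::Password.new(stored).verify(plaintext)
    else
      false
    end
  end
end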

Undefined constant Text

I'm new to Crystal and Granite but when trying the example code, I get:

expanding macro
in macro 'property' expanded macro: macro_4515426128:567, line 4:

   1.
   2.
   3.
>  4.             @message : Union(Text | Nil)
   5.
   6.             def message : Union(Text | Nil)
   7.               @message
   8.             end
   9.
  10.             def message=(@message : Union(Text | Nil))
  11.             end
  12.
  13.
  14.
  15.

undefined constant Text

I simply created a model with

field message : Text

Support not nil and default values in mapping

Look at supporting non-nilable types in the mapping. Also provide default values.

This would modify the generated property field : Type = default declaration, driven by parameters provided in the mapping.
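For illustration only, the kind of DSL this would enable might look like the following (hypothetical syntax, not a current feature; the nilable option name is invented):

class Comment < Granite::ORM::Base
  adapter pg

  field body : String                     # nilable, as today
  field likes : Int32 = 0                 # proposed: non-nil with a default value
  field author : String, nilable: false   # proposed: maps to a NOT NULL column
end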

How to create tables

I am not sure how to create tables using granite-orm and was not able to find documentation for it.
My code below produces the error: Save Exception: no such table: abc

class Abc < Granite::ORM::Base
  adapter sqlite
  table_name abc
  field name : String
end 

a = Abc.new 
a.name = "test"
a.save

Error Message:
Save Exception: no such table: abc

undefined constant SQLite3::Database

I'm very new to Crystal, so apologies if this is a silly question.

I'm getting the error below:

Error in ./libs/kemalyst-model/adapter/sqlite.cr:6: undefined constant SQLite3::Database

  @pool : ConnectionPool(SQLite3::Database)
                         ^~~~~~~~~~~~~~~~~

shard.yml

dependencies:
  kemal:
    github: sdogruyol/kemal
    branch: master
  slang:
    github: jeromegn/slang
  kemalyst-model:
    github: drujensen/kemalyst-model
    branch: master
  sqlite3:
    github: crystal-lang/crystal-sqlite3
    branch: master

database.yml

sqlite:
  database: config/test.db

user.cr

require "kemalyst-model/adapter/sqlite"

class User < Kemalyst::Model
  adapter sqlite

  # table name is set to post_comments and timestamps are disabled.
  sql_mapping({ 
    first_name: ["VARCHAR(255)", String], 
    surname: ["VARCHAR(255)", String], 
    address: ["TEXT", String],
    postcode: ["VARCHAR(255)", String]
  })

end

Many thanks.

Catch type-casting errors in set_attributes to append to errors

Currently in set_attributes, casting errors simply bubble up, making errors potentially ambiguous when creating records with a param hash.

class Post < Granite::ORM::Base
  field likes : Int32
end

p = Post.new({ "likes" => "" })

Results in:

Invalid Int32:

These errors are not stored in p.errors, as this happens before #valid? is called. This means that param hashes used to create a new Post must be validated before being passed to Post.new.

Perhaps cast_to_field could rescue type-casting errors and append the error message, including the field name, to p.errors. This would allow some core validation to remain in Granite rather than duplicating it in each model or in controllers.

granite_orm vs granite-orm

I noticed that we use an underscore in the shard name (granite_orm) and a dash in the GitHub repository name (granite-orm). I think it would be better to have some consistency here to make configuration easier.

Fields of type Time are not serializable as JSON under Crystal 0.24.1

As reported by a kind passer-by in IRC, it seems this line no longer works:

            json.field %field, %value.try(&.to_s(%F %X))

from fields.cr

I haven't yet upgraded my machine to Crystal 0.24.1, so I haven't verified this bug personally, but it seems the line should have had quotation marks around the format string in the first place and was only working because of a fallthrough in the preprocessor that doesn't exist anymore.

Provide field! for property!

Most of the time, we are quite sure a field is not nil (thanks to a validator, a default value, or a SQL NOT NULL constraint), so it is annoying to check for nil every time; property! works well for this situation.
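A rough sketch of what such a macro could expand to (assuming the existing field macro stores the value in an instance variable of the same name):

macro field!(decl)
  field {{decl.var}} : {{decl.type}}

  # raising getter, mirroring property!: returns the value or raises if it is nil
  def {{decl.var.id}} : {{decl.type}}
    @{{decl.var.id}}.not_nil!
  end
end

# usage: field! name : String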

add support for 1 to many relationships

Create two new macros: belongs_to and has_many

require "./post"

class Comment < Kemalyst::Model
  belongs_to post
  ...
end

This will create a mapping to the post_id : Int64 field and generate a method that will query the parent and return it.

def post
  Post.find post_id
end

require "./comment"

class Post < Kemalyst::Model
  has_many comments
  ...
end

This will create a method called comments that will query the child records.

def comments
  Comment.all("JOIN posts ON posts.id = comments.post_id WHERE posts.id = ?", id)
end

Options to override the field names and types should be considered for legacy database mappings.

One of the difficulties is requiring the other model. Macros do not allow you to inject require into the class.

Another issue is the N+1 queries for comments. I think we can find ways of optimizing this later.

Simple localized fields implementation

So I have:

class Article < Granite::ORM::Base
....
  field title_en : String
  field title_it : String

  # if it was ruby I could implement it like this
  def title
    self.send("title#{I18n.locale}")
  end
end

Can it be implemented in Crystal, with macros maybe? If yes, can someone help me with that?
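For illustration, a macro along these lines might work (a sketch; it assumes I18n.locale returns the locale as a String such as "en" or "it"):

macro localized_field(name, *locales)
  {% for locale in locales %}
    field {{name.id}}_{{locale.id}} : String
  {% end %}

  # returns the column matching the current locale
  def {{name.id}}
    case I18n.locale
    {% for locale in locales %}
    when {{locale.id.stringify}} then {{name.id}}_{{locale.id}}
    {% end %}
    end
  end
end

# usage inside the model body:
#   localized_field title, :en, :it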

Logic about how to instruct Granite to connect to a database is confusing/misleading/out of step with Amber core

I've run into this issue myself, but a kind passer-by in IRC was snagged by it too, so I think it needs some attention. This code from adapter/base.cr is leading to confusion:

  def initialize(adapter : String)
    if url = ENV["DATABASE_URL"]? || Granite::ORM.settings.database_url || replace_env_vars(settings(adapter)["database"].to_s)
      @database = DB.open(url)
    else
      raise "database url needs to be set in the config/database.yml or DATABASE_URL environment variable"
    end
  end


  DATABASE_YML = "config/database.yml"


  def settings(adapter : String)
    if File.exists?(DATABASE_YML) &&
       (yaml = YAML.parse(File.read DATABASE_YML)) &&
       (settings = yaml[adapter])
      settings
    else
      return {"database": ""}
    end
  end

I don't personally run my development environment in Docker, and the logic here requires me to have a config/database.yml file, presumably because Granite::ORM.settings.database_url isn't getting set properly?

I believe the code relating to config/database.yml is legacy and can be removed.

For searchability, the error message this produces is:

no driver was registered for the schema "", did you maybe forget to require the database driver? (ArgumentError)

The quick solution to this problem is to put your database credentials in a file at config/database.yml and it should look something like this:

pg:
  database: postgres://username:password@localhost:5432/database_name

DATABASE_URL ENV should overwrite database.yml if it exists.

In Rails, DATABASE_URL overwrites database.yml if it exists.
If non-duplicate information is provided, you get the union of the values, with the environment variable still taking precedence in case of any conflicts.
This is really useful because it allows you to configure default values in database.yml but override them for Docker or production.

If you agree with this I'll send a pull request.

Creating a model with no "real" fields causes amber to fail

When a model is created with no "real" fields (i.e. when it has only id and timestamps), some macro doesn't expand to valid code.

Steps to reproduce:

amber new myapp
crystal src/myapp.cr <-- works

amber g model mymodel
crystal src/myapp.cr <-- fails with:

Error in macro 'inherited' /tmp/myapp/lib/granite_orm/src/granite_orm/base.cr:22, line 1:

>  1.     macro finished
   2.       __process_table
   3.       __process_fields
   4.       __process_querying
   5.       __process_transactions
   6.     end
   7.   

expanding macro
in macro 'finished' expanded macro: inherited:1, line 2:

   1.       __process_table
>  2.       __process_fields
   3.       __process_querying
   4.       __process_transactions
   5.     

macro didn't expand to a valid program, it expanded to:

================================================================================
--------------------------------------------------------------------------------
   1.     # Create the properties
   2.     
   3.     
   4.       property created_at : Time?
   5.       property updated_at : Time?
   6.     
   7. 
   8.     # keep a hash of the fields to be used for mapping
   9.     def self.fields(fields = [] of String)
  10.       
  11.       
  12.         fields << "created_at"
  13.         fields << "updated_at"
  14.       
  15.       return fields
  16.     end
  17. 
  18.     # keep a hash of the params that will be passed to the adapter.
  19.     def params
  20.       parsed_params = [] of DB::Any
  21.       
  22.       
  23.         parsed_params << created_at.not_nil!.to_s("%F %X")
  24.         parsed_params << updated_at.not_nil!.to_s("%F %X")
  25.       
  26.       return parsed_params
  27.     end
  28. 
  29.     def to_h
  30.       fields = {} of String => DB::Any
  31. 
  32.       fields["id"] = id
  33. 
  34.       
  35.       
  36.         fields["created_at"] = created_at.try(&.to_s("%F %X"))
  37.         fields["updated_at"] = updated_at.try(&.to_s("%F %X"))
  38.       
  39. 
  40.       return fields
  41.     end
  42. 
  43.     def to_json(json : JSON::Builder)
  44.       json.object do
  45.         json.field "id", id
  46. 
  47.         
  48. 
  49.         
  50.           json.field "created_at", created_at.try(&.to_s("%F %X"))
  51.           json.field "updated_at", updated_at.try(&.to_s("%F %X"))
  52.         
  53.       end
  54.     end
  55. 
  56.     def set_attributes(args : Hash(String | Symbol, Type))
  57.       args.each do |k, v|
  58.         cast_to_field(k, v.as(Type))
  59.       end
  60.     end
  61. 
  62.     def set_attributes(**args)
  63.       set_attributes(args.to_h)
  64.     end
  65. 
  66.     # Casts params and sets fields
  67.     private def cast_to_field(name, value : Type)
  68.       case name.to_s
  69.         
  70.       end
  71.     end
  72.   
--------------------------------------------------------------------------------
Syntax error in expanded macro: __process_fields:70: unexpected token: end (expecting when or else)

      end
      ^

Weird issue when I try to use kemalyst model without kemalyst.

I copied a model that worked perfectly within a kemalyst project over to a simpler kemal project, but I get a confusing error when I try to compile. I've included kemalyst-model and crystal-mysql, but for some reason I get this error:

 138.     def self.all(clause = "", params = [] of DB::Any)
 139.       rows = [] of self
>140.       @@adapter.select(@@table_name, fields([@@primary_name]), clause, params) do |results|
 141.         results.each do
 142.           rows << self.from_sql(results)
 143.         end
 144.       end
 145.       return rows
 146.     end

instantiating 'Kemalyst::Adapter::Mysql#select(String, Array(String), String, Array(Bool | Float32 | Float64 | Int32 | Int64 | Slice(UInt8) | String | Time | Nil))'
in lib/kemalyst-model/src/adapter/mysql.cr:61: instantiating 'open()'

Is there some other dependency I'm unaware of?

Classes must be in toplevel scope.

The macros break when trying to put the model inside a module, e.g.:

  module Foo
    class Wat < Kemalyst::Model
      adapter pg
      sql_mapping({unicorns: String})
    end
  end

TEXT fields are set to nil

With the MySQL adapter, the type of a TEXT field is Slice(UInt8) and can't be converted to a String using the as_a? method.

This test fails:

it "should get TEXT fields" do
      post = Post.new
      post.name = "Test Post"
      post.body = "Post Body"
      post.save

      posts = Post.all
      p1 = posts.first
      p1.body.should eq "Post Body" # here p1.body == nil
end

Add support for to_json to models

When developing an API-only application with Amber, the need for JSON model serialization becomes apparent rather quickly.

It seems logical to add JSON mapping support to Granite models so I wanted to open this issue to start a discussion.

I'm willing to put some time towards a PR on this, but I wanted to get input from existing members of this project to flesh out an acceptable implementation before putting the effort into actually developing the solution.

Questions:

  1. Should Granite::ORM implement a to_json method on models similar to the JSON.mapping macro?
  2. Should Granite::ORM implement a JSON -> Model interface similar to the JSON.mapping macro?
  3. Is there a good way to inherit some of this functionality (from Crystal core) without re-implementing it?

Relations need the ability to manually override the class, table, and foreign_key

Somewhat pursuant to #76, I'd like to add overrides for the has_many and belongs_to macros which allow a syntax like this:

class IPAddress
  belongs_to :host
end
class Host
  has_many :ip_addresses, class: IPAddress
end

Without any sort of inflector able to correctly singularize "ip_addresses", Granite correctly looks for a table called "ip_addresses" but tries to build the records into a class called IPAddresse. One solution is to implement a proper inflector, but there will always be failure cases where "proper English" and "what people actually want to do" don't match up (or they're programming in one of the myriad non-English languages).

Providing the ability to override the generated class name would help, provide immediate benefit where pluralization by adding and removing "s" fails, and make model code more readable.

Providing the ability to override the foreign_key and type would also address #92 and #93 as well and would be similar work.

This prototype code is mostly working but will need more refinement before it can be viable. @amberframework/granite-orm-contributors thoughts?

  macro has_many(
    relation_plural, class relation_class_name = nil, table_name relation_table_name = nil,
    through join_relation = nil, through_table_name join_table_name = nil)
    {%
      relation_plural = relation_plural.id
      relation_singular = relation_plural[0...-1]

      name_space = @type.name.gsub(/::/, "_").underscore.id
      self_plural = SETTINGS[:table_name] || name_space + "s"
      self_singular = self_plural[0...-1]

      if relation_class_name == nil
        relation_class_name = relation_singular.camelcase
      end

      if relation_table_name == nil
        relation_table_name = relation_class_name.id.underscore + "s"
      end

      if join_table_name == nil
        join_table_name = join_relation
      end

      foreign_key = "#{relation_table_name.id}.#{self_singular}_id"
    %}

    def {{ relation_plural.id }} : Array({{ relation_class_name }})
      return [] of {{ relation_class_name }} unless id

      {% if join_relation %}
        query = <<-SQL
          JOIN {{ join_table_name.id }} ON {{ join_table_name.id }}.{{ relation_singular }}_id = {{ relation_plural }}.id
          WHERE {{ join_table_name.id }}.{{ name_space.id }}_id = ?
        SQL
        {{ relation_class_name }}.all(query, [id])
      {% else %}
        query = "WHERE {{ foreign_key.id }} = ?"
        {{ relation_class_name }}.all(query, [id])
      {% end %}
    end
  end

Save should call validate and fail before saving

I am very new to Crystal and Amber and am trying to create a basic blog application in Amber. I would like to add presence validation on the title and body of the Post model. How can we achieve this?

require "granite_orm/adapter/pg"

class Post < Granite::ORM::Base
  adapter pg

  field name : String
  field body : String
  field draft : Bool
  timestamps
end
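For illustration only, a presence check could be expressed with the validate macro discussed elsewhere in this tracker (a sketch; it assumes a working validate macro whose block receives the model and returns a Bool):

class Post < Granite::ORM::Base
  adapter pg

  field name : String
  field body : String
  field draft : Bool
  timestamps

  validate :name, "name must be present" do |post|
    !post.name.to_s.strip.empty?
  end

  validate :body, "body must be present" do |post|
    !post.body.to_s.strip.empty?
  end
end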

Support for Time

Look into supporting other types that the DB::Any union is missing.

error on running db/migrate.cr - Can't DROP 'total'; check that column/key exists (Exception)

This field was never in my model to begin with.

Looks like it's an issue with #prune.
Even if I add a field in mysql called 'total', the error still occurs (and the new 'total' field is also dropped).

stacktrace:

[4766327] *CallStack::unwind:Array(Pointer(Void)) +87
[4766218] *CallStack#initialize:Array(Pointer(Void)) +10
[4766170] *CallStack::new:CallStack +42
[4688648] *raise<Exception>:NoReturn +24
[4688622] ???
[5091984] *MySql::Connection#handle_err_packet<MySql::ReadPacket>:NoReturn +80
[5498273] *MySql::Statement#perform_exec_or_query<Slice(Float32 | Float64 | Int32 | Int64 | Slice(UInt8) | String | Nil)>:(DB::ExecResult | MySql::ResultSet) +5857
[5492159] *MySql::Statement#perform_exec<Slice(Float32 | Float64 | Int32 | Int64 | Slice(UInt8) | String | Nil)>:DB::ExecResult +63
[5502771] *DB::Statement+ +51
[5502684] *DB::Statement+ +60
[5484443] *DB::PoolStatement#exec:DB::ExecResult +203
[5068117] *DB::Database +37
[5048973] *Kemalyst::Adapter::Mysql#prune<String, Hash(String, String)>:Array(String) +765
[5054277] *Post +53
[4646396] ???
[4707641] main +41
[140029813202757] __libc_start_main +245
[4641177] ???
[0] ???

Sql injection

Is there a library to sanitize user input? Or does kemalyst-model handle that already?

Timestamps should be in UTC, or at least consistent

Today I noticed this discrepancy with the way times are written to and read from the database:

def test
  result = Result.new()
  result.save
  insert_time = Time.now

  return unless timestamp = result.created_at
  result = Result.find(result.id)
  return unless result
  return unless timestamp2 = result.created_at
  puts timestamp - insert_time
  puts timestamp2 - insert_time
end
test
-00:00:00.0027060
-07:00:00.7834670

Driver: postgres, but I haven't tested others. I believe the time is going in with a local offset, and being read out assumed in UTC. I'm in MST(-7), so there's a -7 discrepancy.

The fix might be as easy as a #to_utc in a few places here, but I'm not sure.
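Concretely, the #to_utc suggestion would amount to something like this in the generated params code shown elsewhere in this tracker (a sketch covering the write side only):

# normalize to UTC before formatting the timestamp params
parsed_params << created_at.not_nil!.to_utc.to_s("%F %X")
parsed_params << updated_at.not_nil!.to_utc.to_s("%F %X")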

One to One Relationship

There doesn't seem to be a way currently to have a one-to-one relationship. For example, in Rails there is:

class A
  has_one :class_b
end

class B
  belongs_to :class_a
end

What is the suggested workaround for now (if there is any)? Just use a one-to-many relationship to fake a one-to-one?

Not null fields

Having to handle nil on every field access is a PITA. Support marking fields as not null, or maybe even make that the default and support marking them as nullable. Make the primary key not null by default.

proposal: decouple Model#find from id

Or I guess I should say: decouple #select_one from id, as well as #find.

Currently, the only way to #find a post is with its ID.
However, for vanity URLs, it's nice to search by a slug.

Example implementation:

Post.cr

sql_mapping({
    name: ["VARCHAR(255)", String],
    slug: ["VARCHAR(255)", String],
    body: ["TEXT", String],
})

# example to use either post.slug or post.id
def self.find_by_slug_or_id(input)
  Post.find(input, "slug") || Post.find(input)
end

post_controller.cr

class Show < Kemalyst::Controller
  def call(context)
    authorized = context.session.has_key?("authorized")

    id = context.params["id"]
    if post = Post.find_by_slug_or_id(id)
      render "post/show.ecr", "posts.ecr"
    else
      context.flash["Error"] = "Post with id:#{id} could not be found"
      redirect "/posts"
    end
  end
end

kemalyst-model.cr (a field parameter is added)
This way, people can still use #find as normal.

def self.find(id, field = "id")
  row = nil
  @@adapter.select_one(@@table_name, fields({"id" => "BIGINT"}), field, id) do |result|
    row = self.from_sql(result) if result
  end
  return row
end

mysql.cr (etc for pg/sqlite)

def select_one(table_name, fields, field, id, &block)
  statement = String.build do |stmt|
    stmt << "SELECT "
    stmt << fields.map{|name, type| "#{table_name}.#{name}"}.join(",")
    stmt << " FROM #{table_name}"
    stmt << " WHERE #{field}=? LIMIT 1"
  end
  open do |db|
    db.query_one? statement, id do |rs|
      yield rs
    end
  end
end

Thoughts?

Can't insert model with Time value using mysql adapter

I have the following table:

CREATE TABLE if NOT EXISTS tokens(
  id BIGINT NOT NULL AUTO_INCREMENT,
  permission BIGINT NOT NULL,
  expires_at TIMESTAMP,
  ttl INT,
  created_at TIMESTAMP,
  updated_at timestamp DEFAULT current_timestamp() ,
  primary key(id),
  FOREIGN KEY (permission)
    REFERENCES permissions(id)
);

and I used the following model to represent it:

require "granite_orm/adapter/mysql"

class Token < Granite::ORM
  adapter mysql
  field permission : Int64
  field expires_at : Time
  field ttl : Int32
  timestamps
end

using this code:

require "./models/*"

token = Token.new

token.permission = Int64.new(11)
token.ttl = 41
token.expires_at = Time.now
token.save

I receive this error: Save Exception: Incorrect datetime value: '' for column 'expires_at' at row 1

I investigated a little and the statement and params passed to mysql are:
INSERT INTO tokens (permission,expires_at,ttl,created_at,updated_at) VALUES (?,?,?,?,?)
[11, 2017-08-13 20:54:06 -0300, 41, "2017-08-13 20:54:06", "2017-08-13 20:54:06"]

It seems that the second params value should be expires_at.to_s("%F %X") in order to be accepted by MySQL (as is already done for created_at and updated_at).

I tried to solve the issue and submit a PR, but my knowledge of Crystal macros is not that good yet.
Thanks.

`can't execute OpAssign in a macro` error with Crystal 0.20

OS: macOS 10.12.1
Crystal: 0.20.0 (2016-11-22)
kemalyst-model: latest (shards)

require "./db.cr"
^

in src/db.cr:6: expanding macro

  sql_mapping({
  ^~~~~~~~~~~

in lib/kemalyst-model/src/kemalyst-model.cr:57: can't execute OpAssign in a macro

        {% i += 1 %}
