elixir-sqlite / ecto_sqlite3

An Ecto SQLite3 adapter.
Home Page: https://hexdocs.pm/ecto_sqlite3
License: MIT License
When I tried using `binary_id` primary keys for my tables, I was surprised to see IDs in Elixir's binary format rather than strings. After some research, I found the (undocumented) `:binary_id_type` option. The option supports either `:string` (default) or `:binary` as values. I was surprised to find in the loader for `binary_id` that the `:binary` (rather than `:string`) type uses `Ecto.UUID` (and consequently handles IDs as strings). The actual SQLite column types do seem to match the configuration more intuitively: `:binary` -> `UUID` column with `BLOB` affinity, `:string` -> `TEXT_UUID` column with `TEXT` affinity.
My resulting questions are:

- Shouldn't the `:string` configuration be the one to use `Ecto.UUID` (and therefore handle IDs as plain strings)? That would be much more intuitive to me.
- Or should both configurations use `Ecto.UUID` and handle binary IDs as strings?
- How do the `:uuid` column type and the `:uuid_type` configuration play into this? From my quick testing this type seems to always be handled as strings, with `:uuid_type` only affecting the type of the column in SQLite itself.

When I get some clarification on this I'd be more than happy to contribute the necessary documentation (and code if applicable).
I noticed that this adapter doesn't seem to support DDL transactions or a migration lock during Ecto migrations. Does that support need to be implemented?
I'm probably missing something, but I figure a DDL transaction could be started in exclusive mode, and perhaps the migration lock could be satisfied with a file lock?
https://github.com/elixir-sqlite/ecto_sqlite3/blob/main/lib/ecto/adapters/sqlite3.ex#L218-L224
I'll be happy to contribute if there's a clear way forward.
Running an exists or not exists query fails. E.g. adding this integration test to test/ecto/integration/crud_test.exs:
test "handles exists subquery" do
  account1 = TestRepo.insert!(%Account{name: "Main"})
  user1 = TestRepo.insert!(%User{name: "John"}, [])
  TestRepo.insert!(%AccountUser{user_id: user1.id, account_id: account1.id})

  subquery = from(au in AccountUser, where: au.user_id == parent_as(:user).id, select: 1)
  assert [_] = TestRepo.all(from(a in Account, as: :user, where: exists(subquery)))
end
Produces an error:
1) test select handles exists subquery (Ecto.Integration.CrudTest)
test/ecto/integration/crud_test.exs:237
** (Exqlite.Error) near "(": syntax error
SELECT a0."id", a0."name", a0."email", a0."inserted_at", a0."updated_at" FROM "accounts" AS a0 WHERE (exists((SELECT 1 FROM "account_users" AS sa0 WHERE (sa0."user_id" = a0."id"))))
code: assert [_] = TestRepo.all(from(a in Account, as: :user, where: exists(subquery)))
stacktrace:
(ecto_sql 3.7.2) lib/ecto/adapters/sql.ex:760: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.7.2) lib/ecto/adapters/sql.ex:693: Ecto.Adapters.SQL.execute/5
(ecto 3.7.2) lib/ecto/repo/queryable.ex:219: Ecto.Repo.Queryable.execute/4
(ecto 3.7.2) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
test/ecto/integration/crud_test.exs:244: (test)
The error appears to be that the generated SQL has two opening parentheses after the EXISTS (`exists((`), but SQLite only supports a single `exists(`.
If you drill into the column definition you'll see that `CHECK` is allowed: https://sqlite.org/lang_altertable.html

It doesn't support exclusion constraints, although simply negating the statement might suffice.
When using Elixir 1.14.0 with OTP 25, we get a new set of warnings when running `mix test`. Here are the warnings:
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:29: Ecto.Integration.Post
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:85: Ecto.Integration.Comment
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:109: Ecto.Integration.Permalink
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:133: Ecto.Integration.PostUser
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:151: Ecto.Integration.User
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:182: Ecto.Integration.Custom
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:200: Ecto.Integration.Barebone
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:216: Ecto.Integration.Tag
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:269: Ecto.Integration.Order
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:286: Ecto.Integration.CompositePk
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:307: Ecto.Integration.CorruptedPk
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:322: Ecto.Integration.PostUserCompositePk
warning: Application.get_env/2 is discouraged in the module body, use Application.compile_env/3 instead
deps/ecto/integration_test/support/schemas.exs:339: Ecto.Integration.Usec
Hi, not sure if this is the place to ask, but I'm using `ecto_sqlite3` in a Phoenix app. I want to manually load the decimal extension from https://github.com/nalgeon/sqlean, which isn't present in https://github.com/mindreframer/ex_sqlean.

In `Application.start` I tried to run

MyApp.Repo.query!("select load_extension('./decimal.so');")

but that fails with a "not authorized" error, and I understand I would first need to do something like

alias Exqlite.Basic
{:ok, conn} = Basic.open("db.sqlite3")
:ok = Basic.enable_load_extension(conn)

I'm not sure how to hook into the creation of the database connection when just using an Ecto Repo. Any advice? I know this is not a "bug" so much as a Stack Overflow-style usage question...
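One possibility, hedged: recent exqlite releases document a `:load_extensions` connection option that the Ecto adapter passes through to the driver. Verify against your installed version's docs before relying on it; the extension path below is hypothetical.

```elixir
# Hedged sketch: relies on the `:load_extensions` option documented by
# newer exqlite releases (check your version); the path is hypothetical.
config :my_app, MyApp.Repo,
  database: "db.sqlite3",
  load_extensions: ["./decimal.so"]
```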
Hi everyone,
SQLite3 does not currently support async tests in the sandbox. At first that makes sense, because SQLite allows only a single writer. But then I realized that in the sandbox we never commit transactions anyway, so we should never actually be writing. That's when I found that SQLite3 supports a feature called "BEGIN CONCURRENT" (available on SQLite's begin-concurrent branch), which allows concurrent transactions and serializes only the commit operation, which we never do!

Therefore, would it be worth trying to support this? Perhaps there could be a config option on this project that chooses the transaction type (concurrent or not), and we set it to concurrent in the sandbox.

WDYT?
This was the latest PR merged to elixir-ecto/ecto_sql@b67cdae. We should port it here to keep parity.
I've seen this - #41

It seems that using the dumper dumps to a string that can't later be read properly by SQLite; it's stored like this in a JSON field:

"{\"property_type\":\"foo\",\"__type__\":\"something\"}"

instead of like this:

{"property_type": "foo", "__type__": "something"}

This means functions like `json_extract` won't work natively in SQLite.
I believe it's because the dump is done this way:

dumper.(:map, map)

So with SQLite, this dumps as a string:
{:ok,
"{\"foo\":\"x\",\"bar\":\"2021-12-03T07:22:16.086910Z\",\"baz\":\"xxx\",\"__type__\":\"something\"}"}
But with Postgres it dumps as a map:
{:ok,
%{
:something => "hello",
"__type__" => :"Whatever"
}}
Not sure if this is an ecto_sqlite3 problem or a polymorphic_embed issue.
It looks like creating normal maps works properly as JSON.
Hey,

Love the work you are putting into Exqlite and this adapter.

I ran into some confusion. The docs mention "ALTER TABLE which Sqlite does not support", which runs counter to SQLite's own docs: https://sqlite.org/lang_altertable.html. Or does that language just mean that SQLite doesn't support altering constraints via ALTER TABLE? I think I ran into an unsupported use of ALTER TABLE during my first tries with exqlite and just want to be certain that I will be able to add/remove fields in the future without too much pain.
I've noticed that large queries are significantly slower via `ecto_sqlite3` than through the `sqlite3` CLI.
Example:
defmodule Datadoor.Transaction do
  use Ecto.Schema

  schema "transactions" do
    field :transaction_amount, :integer
    field :transaction_date, :naive_datetime

    timestamps(inserted_at: :created_at)
  end
end
Here is the example data of ~650k rows: dev.db.zip
Via Ecto: 22504.3ms
Transaction
|> select([:transaction_date, :transaction_amount])
# |> limit(100_000)
|> Repo.all()
[debug] QUERY OK source="transactions" db=22504.3ms decode=0.8ms queue=0.1ms idle=112.7ms
SELECT t0."transaction_date", t0."transaction_amount" FROM "transactions" AS t0 []
Via Ecto (Raw SQL): 21520.2ms
Ecto.Adapters.SQL.query(
Repo,
"SELECT transaction_date, transaction_amount FROM transactions"
)
[debug] QUERY OK db=21520.2ms decode=1.4ms idle=106.7ms
SELECT transaction_date, transaction_amount FROM transactions []
Via SQLite3 CLI or any other SQLite3 GUI:
sqlite> .timer on
sqlite> SELECT transaction_date, transaction_amount FROM transactions;
Run Time: real 2.680 user 0.423684 sys 0.323269
sqlite>
Is there any good explanation for this performance difference or some obvious steps to improve the performance?
We should be able to match what the postgres and mysql adapters can do. I think we could improve how the lists are built / concatenated. It looks ugly, but it would be faster.
I wonder if there are better ways to build the query messages.
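As context for the list-building idea, here is a minimal sketch (not the adapter's actual code, and the names are made up) of why assembling SQL as iodata and converting once tends to beat repeated binary concatenation:

```elixir
# Illustrative only: build a SELECT clause as iodata instead of
# concatenating binaries; the VM flattens it in a single pass.
fields = ["transaction_date", "transaction_amount"]

iodata = [
  "SELECT ",
  Enum.intersperse(Enum.map(fields, &[?", &1, ?"]), ", "),
  " FROM transactions"
]

IO.iodata_to_binary(iodata)
# "SELECT \"transaction_date\", \"transaction_amount\" FROM transactions"
```

The win is that each fragment is referenced, not copied, until the final flatten; repeated `<>` copies the accumulated binary on every append.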
I have a relational setup where I would like to get an `{:error, changeset}` tuple back when the foreign record doesn't exist. However, no matter what I try, I always get this error:
14:02:56.563 [error] Task #PID<0.559.0> started from #PID<0.557.0> terminating
** (Ecto.ConstraintError) constraint error when attempting to insert struct:
* (foreign_key_constraint)
If you would like to stop this constraint violation from raising an
exception and instead add it as an error to your changeset, please
call `foreign_key_constraint/3` on your changeset with the constraint
`:name` as an option.
The changeset defined the following constraints:
* intervals_schedule_id_fkey (foreign_key_constraint)
...
If I'm understanding right, SQLite knows there is a constraint here, but it's unnamed, so all we get back as a clue is an empty `* (foreign_key_constraint)`. And since there is no name to match on, Ecto can't reconcile it and things just crash. I've even tried using the `name: ""` option, but I think Ecto just ignores the empty string.

Is it even possible to get around this, or are we limited by the abilities of SQLite here? I understand I can use the `foreign_key: :off` option in the adapter to skip validations entirely, which is currently my workaround, but I would love to get the `{:error, changeset}` return instead.
Caveat: I am in no way a DB expert and this could be 100% user error. My migrations are simple; I'd love to know if I'm just doing things wrong:

create table(:schedules) do
  add :name, :string
end

create table(:intervals) do
  add :schedule, references(:schedules)
end
I'm not sure if this is the right place to report this issue, but it seems ecto_sqlite3 0.6 and 0.7 don't play well with HTTPS support in Cowboy? Or maybe Jason or Ranch?
I haven't been able to figure out the cause of the issue, only that it happens when I try to update my ecto_sqlite3 dependency from 0.5.5 to 0.7.
[error] Ranch listener ProjectWeb.Endpoint.HTTPS had connection process started with :cowboy_tls:start_link/4 at #PID<0.927.0> exit with reason:
{%Jason.EncodeError{message: "invalid byte 0x85 in <<32, 105, 100, 61, 34, 133, 174, 112, 225, 66, 95, 66, 251, 145, 45, 206, 229, 63, 145, 104, 148, 34>>"}, [{Jason, :encode_to_iodata!, 2, [file: 'lib/jason.ex', line: 199, error_info: %{module: Exception}]}, {Phoenix.Socket.V2.JSONSerializer, :encode!, 1, [file: 'lib/phoenix/socket/serializers/v2_json_serializer.ex', line: 70]}, {Phoenix.Socket, :encode_reply, 2, [file: 'lib/phoenix/socket.ex', line: 686]}, {Phoenix.Socket, :handle_in, 4, [file: 'lib/phoenix/socket.ex', line: 598]}, {Phoenix.Endpoint.Cowboy2Handler, :websocket_handle, 2, [file: 'lib/phoenix/endpoint/cowboy2_handler.ex', line: 134]}, {:cowboy_websocket, :handler_call, 6, [file: '<path_to_project>/deps/cowboy/src/cowboy_websocket.erl', line: 528]}, {:cowboy_http, :loop, 1, [file: '/Users/kreykjalin/Documents/Personal/inner_circle/deps/cowboy/src/cowboy_http.erl', line: 257]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 226]}]}
I've tried to rebuild the project from scratch, including with a new lock file like so:
rm -rf _build deps mix.lock
mix deps.get
mix phx.server
But I still see the error I pasted in above.
The context of the issue: I'm working on a Phoenix LiveView application and wanted to update my dependencies since it's been a while since I did that. All the updates went really smoothly with no issues, but when I try to update ecto_sqlite3 I run into this issue.
I'm running Elixir v1.13.0 on macOS Monterey:
$ elixir --version
Erlang/OTP 24 [erts-12.1.5] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [dtrace]
Elixir 1.13.0 (compiled with Erlang/OTP 24)
Here is the dependency list in my `mix.exs` file:
defp deps do
  [
    {:bcrypt_elixir, "~> 2.3"},
    {:phoenix, "~> 1.6"},
    {:phoenix_ecto, "~> 4.4"},
    {:ecto_sqlite3, "~> 0.5.5"},
    {:phoenix_live_view, "~> 0.17.5"},
    {:floki, ">= 0.30.0", only: :test},
    {:phoenix_html, "~> 3.1"},
    {:phoenix_live_reload, "~> 1.3", only: :dev},
    {:phoenix_live_dashboard, "~> 0.6"},
    {:telemetry_metrics, "~> 0.6"},
    {:telemetry_poller, "~> 1.0"},
    {:gettext, "~> 0.18"},
    {:jason, "~> 1.2"},
    {:plug_cowboy, "~> 2.5"},
    {:bamboo, "~> 2.2.0"},
    {:bamboo_smtp, github: "fewlinesco/bamboo_smtp"},
    {:site_encrypt, "~> 0.4"},
    {:canada, github: "jarednorman/canada"},
    {:nebulex, "~> 2.3"},
    {:decorator, "~> 1.4"},
    {:esbuild, "~> 0.4", runtime: Mix.env() == :dev}
  ]
end
Do you have any idea how I can try to figure out what the problem is? My best guess is that something in the session stored in the DB (created via `phx.gen.auth`) is not being deserialized correctly? But I really have no clue.
If you don't think this is the right place to report the issue feel free to direct me somewhere else.
Should follow this pattern https://keepachangelog.com/en/1.0.0/
Right now `Exqlite` has some "opinionated" recommendations, like defaulting foreign keys to `on`, but defers to the application developer to configure things like `journal_mode`. I think that, at least in the Ecto adapter, the opinionated approach should be extended to the other options, like `journal_mode`. This comes from my experience reading forums and issues around SQLite, where novice developers do not understand that they need to set `busy_timeout` and `journal_mode` to better default values. This is also reflected in the sentiment of the SQLite forums and maintainers, where these options are lauded and only left off by default because of backwards-compatibility concerns.

What do you think @warmwaffles ?
If so, the question is whether these more sane defaults should be shared across both Exqlite and this adapter. I can see the appeal of the Exqlite adapter being more conservative and less prescriptive, as those who need to use the driver directly probably know what they are doing and would expect the defaults to be nearer the SQLite defaults.
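As a concrete sketch of what an application would opt into today (these option names exist in the adapter's configuration; the values are common community recommendations, not project-endorsed defaults):

```elixir
# Hedged example: explicitly setting the pragmas discussed above.
# `journal_mode` and `busy_timeout` are existing ecto_sqlite3/exqlite
# options; the values shown are suggestions, not official defaults.
config :my_app, MyApp.Repo,
  database: "priv/my_app.db",
  journal_mode: :wal,
  busy_timeout: 5_000
```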
0.7.0 includes a backport of code, which is not yet released upstream.
For all constraints, whether at column or table level, we can simply specify the constraint name ourselves. This allows us to support matching on constraint violations with changesets. We may also be able to use this to remove the `constraint_name_hack` we currently use for unique constraints.
@kevinlang I saw that you had a logo made. Let's add it to repos as the logo. Unless you want to explore other potential logos.
Via exqlite 0.6.1.

This could be a case of "I don't know what I'm doing", but I can't seem to get past this error when trying to use a fragment and a pinned variable in a query:
fc = 101
len = 26

query =
  from(
    c in C,
    where: c.fc == ^fc and fragment("format LIKE '%?%'", ^len)
  )

Repo.all(query)
Error:
15:50:26.087 [debug] QUERY ERROR source="cs" db=0.0ms queue=0.1ms idle=1824.6ms
SELECT c0."id", c0."fc", c0."format", FROM "cs" AS c0 WHERE ((c0."fc" = ?) AND format LIKE '%?%') [101, 26]
** (Exqlite.Error) arguments_wrong_length
SELECT c0."id", c0."fc", c0."format" FROM "cs" AS c0 WHERE ((c0."fc" = ?) AND format LIKE '%?%')
(ecto_sql 3.6.2) lib/ecto/adapters/sql.ex:760: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.6.2) lib/ecto/adapters/sql.ex:693: Ecto.Adapters.SQL.execute/5
(ecto 3.6.2) lib/ecto/repo/queryable.ex:224: Ecto.Repo.Queryable.execute/4
(ecto 3.6.2) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
But if I hardcode the values, it works:
iex()> query =
...()> from(
...()> c in C,
...()> where: c.fc == 101 and fragment("format LIKE '%?%'", 26)
...()> )
iex()> Repo.all(query)
[
%C{
__meta__: #Ecto.Schema.Metadata<:loaded, "cs">,
fc: 101,
format: "this has 26 in the format",
id: 2
}
]
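For what it's worth, the hardcoded version only appears to work by accident: a `?` inside a quoted SQL string literal is plain text, not a bind placeholder, so the pinned `^len` supplies a parameter the statement never asks for (hence `arguments_wrong_length`). The usual workaround is to keep the placeholder outside the literal and concatenate the wildcards in SQL. A sketch, assuming the same `fc`/`len` bindings and `C` schema as above:

```elixir
# Sketch of the fix: `?` stays outside the string literal so it is a
# real bind placeholder, and the LIKE pattern is built with SQL `||`.
query =
  from(c in C,
    where: c.fc == ^fc and fragment("format LIKE '%' || ? || '%'", ^len)
  )

Repo.all(query)
```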
Hi!

Thank you for an excellent library. I'm curious whether upsert is supported, and specifically whether `Repo.insert!(changeset, on_conflict: :nothing)` should be working?

Br,
Mats
Howdy,
I'm trying to figure out the best way to test my Elixir application using Ecto + a supervision tree. During the test the supervision tree dynamically starts processes which causes test pollution. The tests pass when run individually.
My initial thought was that I should restart the application, but that causes the following problem when trying to check out a new connection from the Ecto sandbox.
** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 10999ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:
1. Ensuring your database is available and that you can connect to it
2. Tracking down slow queries and making sure they are running fast enough
3. Increasing the pool_size (albeit it increases resource consumption)
4. Allowing requests to wait longer by increasing :queue_target and :queue_interval
See DBConnection.start_link/2 for more information
I get the following error in my logs
17:44:49.279 [error] Exqlite.Connection (#PID<0.950.0>) failed to connect: ** (Exqlite.Error) got :database_open_failed while retrieving Exception.message/1 for %Exqlite.Error{message: :database_open_failed, statement: nil} (expected a string)
This is my custom data case template
defmodule Tai.TestSupport.DataCase do
@moduledoc """
This module defines the setup for tests requiring
access to the application's data layer.
You may define functions here to be used as helpers in
your tests.
Finally, if the test case interacts with the database,
we enable the SQL sandbox, so changes done to the database
are reverted at the end of every test. If you are using
PostgreSQL, you can even run database tests asynchronously
by setting `use Tai.DataCase, async: true`, although
this option is not recommended for other databases.
"""
use ExUnit.CaseTemplate
using do
quote do
alias Tai.NewOrders.OrderRepo
alias Tai.TestSupport.Mocks
import Ecto
import Ecto.Changeset
import Ecto.Query
import Tai.TestSupport.DataCase
import Tai.TestSupport.Mock
import Tai.TestSupport.Factories.NewOrderSubmissionFactory
import Tai.TestSupport.Factories.NewOrderFactory
import Tai.TestSupport.Factories.OrderTransitionFactory
import Tai.TestSupport.Factories.FailedOrderTransitionFactory
end
end
setup tags do
Application.stop(:tai)
{:ok, _} = Application.ensure_all_started(:tai)
:ok = Ecto.Adapters.SQL.Sandbox.checkout(Tai.NewOrders.OrderRepo)
unless tags[:async] do
Ecto.Adapters.SQL.Sandbox.mode(Tai.NewOrders.OrderRepo, {:shared, self()})
end
start_supervised!(Tai.TestSupport.Mocks.Server)
:ok
end
@doc """
A helper that transforms changeset errors into a map of messages.
assert {:error, changeset} = Accounts.create_user(%{password: "short"})
assert "password is too short" in errors_on(changeset).password
assert %{password: ["password is too short"]} = errors_on(changeset)
"""
def errors_on(changeset) do
Ecto.Changeset.traverse_errors(changeset, fn {message, opts} ->
Regex.replace(~r"%{(\w+)}", message, fn _, key ->
opts |> Keyword.get(String.to_existing_atom(key), key) |> to_string()
end)
end)
end
end
Right now we hook into the `:configure` function we hand off to `DBConnection.child_spec` to sleep a random time between 0-50ms. But that randomness can "fail": we can still hit a "database is locked" issue if two processes both roll the same die.

A better solution is to use the `:pool_index` option provided to us to more evenly space out the sleeping (e.g., each connection spaced ~5ms apart). That should eliminate the "database is locked" error log in nearly every case.

There is also a TODO there around making sure we respect any `:configure` options passed in, which we will also want to address. I'm not sure if that would ever happen, though.
Is there any way to specify that a table's primary key should be AUTOINCREMENT?
Hey,

I think it should be possible to use reserved SQL keywords as field names and such if they are properly escaped. I ran into a migration not running due to naming something `order`.

Figured I'd let you know.
Currently, the function checks whether the database file exists, which returns false for in-memory databases. That in turn fails `Phoenix.Ecto.CheckRepoStatus` during development. I would like to see if we can make the function return true instead.
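A sketch of the proposed change, assuming the function in question follows the `Ecto.Adapter.Storage.storage_status/1` callback shape (illustrative only, not the adapter's actual code):

```elixir
defmodule StorageStatusSketch do
  # Hedged sketch: treat in-memory databases as always :up, and fall
  # back to the existing file-existence check for on-disk databases.
  def storage_status(options) do
    database = Keyword.get(options, :database)

    cond do
      database == ":memory:" -> :up
      is_binary(database) and File.exists?(database) -> :up
      true -> :down
    end
  end
end
```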
I've got a table that looks like this:
CREATE TABLE IF NOT EXISTS "things" ("id" INTEGER PRIMARY KEY, "thing_id" INTEGER NOT NULL, "user_id" INTEGER NOT NULL, "timestamp" TEXT_DATETIME NOT NULL);
With data that looks like:
1|1|2|2016-03-09 23:26:46
2|2|3|2016-03-11 18:56:01
3|3|2|2016-03-08 15:24:38
4|4|2|2016-03-08 14:23:26
5|5|2|2016-03-08 10:02:39
6|6|2|2016-03-07 20:18:08
7|7|2|2016-03-02 15:30:22
8|8|2|2016-03-01 13:59:31
9|9|2|2016-02-25 19:02:16
10|10|2|2016-02-24 17:49:14
...
And a query that's very similar to this:
Thing
|> select([f], {min(f.timestamp), f.thing_id})
|> where([f], f.user_id in ^user_ids)
|> where([f], fragment("(?, ?) < (?, ?)", f.timestamp, f.thing_id, ^since_timestamp, ^since_thing_id))
|> group_by([f], f.thing_id)
|> order_by([f], desc: min(f.timestamp), desc: f.thing_id)
|> limit(^count)
I've been tearing my hair out trying to work out why it works fine in my SQLite console, but through Ecto it doesn't seem to produce the right results: the dates in the results are incorrect. I was passing a `DateTime` in as `since_timestamp`, but I've discovered the query only works correctly if I pass in the same value as an ISO 8601 string, so `"2022-05-29 06:34:29"` instead of `~U[2022-05-29 06:34:29Z]`.

I'm guessing this has something to do with how a `DateTime` is cast into a string by Ecto, which is different from the string format in the database. It's really difficult to debug, because I can't see exactly what query is being produced.
Let me know if I can provide any more information or a test case.
This will likely require support from `exqlite` for named parameters: https://www.sqlite.org/c3ref/bind_blob.html

Exqlite will need to handle taking a parameter name and finding the correct parameter index via `sqlite3_bind_parameter_index()` when it does its binding, if I am understanding correctly.
Right now we only have our thundering-herd fix if no `:configure` is defined. We should also handle the case where it is defined, wrapping it with our sleep fix.
To be pursued once we hit 1.0
I think we should add to the Limitations section of the docs a simple example of how a schemaless query would work for SQLite3.
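Something along these lines could serve as the documented example (the table, columns, and `MyApp.Repo` are hypothetical placeholders):

```elixir
import Ecto.Query

# Schemaless query: select straight from a table name into plain maps,
# no Ecto.Schema module required. Names here are made up.
min_age = 18

query =
  from(u in "users",
    where: u.age > ^min_age,
    select: %{id: u.id, name: u.name}
  )

MyApp.Repo.all(query)
```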
I'm not really sure this is an issue with this library, but not using an in memory db works just fine.
Run using elixir script.exs
Mix.install([:ecto_sql, :ecto_sqlite3])
Application.put_env(:my_app, Repo, database: ":memory:", migration_lock: false)
defmodule Repo do
use Ecto.Repo, otp_app: :my_app, adapter: Ecto.Adapters.SQLite3
end
:ok = Ecto.Adapters.SQLite3.storage_up(Repo.config())
Repo.start_link()
defmodule Migration do
  use Ecto.Migration

  def change do
    create table("abc", primary_key: false) do
      add(:id, :binary_id, null: false)
      add(:uuid, :binary_id, null: false)
      timestamps()
    end

    create table(:testing) do
      add :company, :string
      add :name, :string
      add :vintage, :string
      add :drunk_at, :utc_datetime
      timestamps()
    end

    create index(:testing, [:company])
  end
end
Ecto.Migrator.up(Repo, 0, Migration, log: :info, skip_table_creation: false)
** (Exqlite.Error) no such table: schema_migrations
SELECT CAST(s0."version" AS INTEGER) FROM "schema_migrations" AS s0
(ecto_sql 3.8.3) lib/ecto/adapters/sql.ex:932: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.8.3) lib/ecto/adapters/sql.ex:847: Ecto.Adapters.SQL.execute/6
(ecto 3.8.4) lib/ecto/repo/queryable.ex:221: Ecto.Repo.Queryable.execute/4
(ecto 3.8.4) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
(ecto_sql 3.8.3) lib/ecto/migrator.ex:536: Ecto.Migrator.lock_for_migrations/4
Howdy,
I'm trying to use the polymorphic_embed library with the SQLite3 Ecto adapter and I'm running into an error loading a map field. It works correctly with the Ecto Postgres adapter. It seems the loader is handed the raw JSON string where a deserialized map is expected.
1) test returns the oldest 25 order transitions by default (Tai.Orders.SearchTransitionsTest)
apps/tai/test/tai/orders/search_transitions_test.exs:4
** (FunctionClauseError) no function clause matching in PolymorphicEmbed.do_get_polymorphic_module_from_map/3
The following arguments were given to PolymorphicEmbed.do_get_polymorphic_module_from_map/3:
# 1
"{\"last_received_at\":\"2021-07-23T06:42:10.671042Z\",\"last_venue_timestamp\":\"2021-07-23T06:42:10.671036Z\",\"__type__\":\"cancel\"}"
# 2
"__type__"
# 3
[%{identify_by_fields: [], module: Tai.Orders.Transitions.AcceptCreate, type: "accept_create"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.VenueCreateError, type: "venue_crea
te_error"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.RescueCreateError, type: "rescue_create_error"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.Open, type: "open"}, %
{identify_by_fields: [], module: Tai.Orders.Transitions.PendCancel, type: "pend_cancel"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.AcceptCancel, type: "accept_cancel"}, %{identify_b
y_fields: [], module: Tai.Orders.Transitions.VenueCancelError, type: "venue_cancel_error"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.RescueCancelError, type: "rescue_cancel_error"},
%{identify_by_fields: [], module: Tai.Orders.Transitions.Cancel, type: "cancel"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.PendAmend, type: "pend_amend"}, %{identify_by_fields: [],
module: Tai.Orders.Transitions.AcceptAmend, type: "accept_amend"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.VenueAmendError, type: "venue_amend_error"}, %{identify_by_fields: [], m
odule: Tai.Orders.Transitions.RescueAmendError, type: "rescue_amend_error"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.Amend, type: "amend"}, %{identify_by_fields: [], module: Tai.Or
ders.Transitions.Fill, type: "fill"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.PartialFill, type: "partial_fill"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.Expire, t
ype: "expire"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.Reject, type: "reject"}, %{identify_by_fields: [], module: Tai.Orders.Transitions.Skip, type: "skip"}]
Attempted function clauses (showing 1 out of 1):
defp do_get_polymorphic_module_from_map(%{} = attrs, type_field, types_metadata)
code: search_order_transitions = Tai.Orders.search_transitions(order.client_id, nil)
stacktrace:
(polymorphic_embed 1.6.3) lib/polymorphic_embed.ex:286: PolymorphicEmbed.do_get_polymorphic_module_from_map/3
(polymorphic_embed 1.6.3) lib/polymorphic_embed.ex:248: PolymorphicEmbed.load/3
(ecto 3.6.2) lib/ecto/type.ex:894: Ecto.Type.process_loaders/3
(ecto 3.6.2) lib/ecto/repo/queryable.ex:406: Ecto.Repo.Queryable.struct_load!/6
(ecto 3.6.2) lib/ecto/repo/queryable.ex:238: anonymous fn/5 in Ecto.Repo.Queryable.preprocessor/3
(elixir 1.11.4) lib/enum.ex:1411: Enum."-map/2-lists^map/1-0-"/2
(ecto 3.6.2) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
(ecto 3.6.2) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
test/tai/orders/search_transitions_test.exs:8: (test)
I've created a branch to reproduce the problem:
https://github.com/fremantle-industries/tai/tree/sqlite3-polymorphic-embed-invalid-map-field
$ mix test test/tai/orders/search_transitions_test.exs
The embed field is defined in the `OrderTransition` schema.
I'm currently in the process of migrating an sqlite_ecto2 project to this adapter, and I noticed that this one stores `utc_datetime` values as ISO strings with timezone information (a trailing `Z`), while `sqlite_ecto2` doesn't store an offset.

The comment here seems to suggest this is not yet set in stone:

ecto_sqlite3/lib/ecto/adapters/sqlite3/codec.ex
Lines 49 to 54 in 886bd6f
I'd argue that the adapter shouldn't push timezone knowledge into the db, just like the other Ecto adapters. Neither Postgres nor MySQL natively supports a datetime column that retains timezone information. Even though `timestamptz` on Postgres does convert between the session timezone and the column value automatically, it also just stores UTC and not the input timezone, and Ecto defaults to `timestamp` columns on Postgres anyway. Additionally, the `:utc_datetime` Ecto type already ensures the datetime is in UTC at runtime, so other timezones are not allowed for such columns in the first place.

Would you be open to adjusting the handling for `:utc_datetime`?
While making sure we don't run into compatibility issues migrating from the sqlite_ecto2 adapter to this one, I hit another difference: UUIDs. Currently this adapter seems to store UUIDs as strings, while the old adapter opted for the smaller option of raw binary data. I found this in the test suite:

ecto_sqlite3/test/ecto/adapters/sqlite3/connection_test.exs
Lines 950 to 951 in 8ff7c8d

I'm wondering if you've already decided one way or the other? It would be really great if this adapter could be switched to the 16-byte binary storage. In the embedded world, storing data in a format more than twice as big doesn't sound appealing.

I've added the appropriate `loaders/2` and `dumpers/2` callbacks locally to use `Ecto.UUID`; nothing in the test suite broke, and it made my compatibility check between the adapters pass.
In SQLite, `EXPLAIN QUERY PLAN …` and `EXPLAIN …` return different output. Currently only the latter is supported via `Repo.explain(:all, query)`, but I'd expect the former is actually the more common reason people reach for explain. I think both options should be available (selectable via the opts), and `EXPLAIN QUERY PLAN` should be the default rather than plain `EXPLAIN`.
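A hypothetical shape for the option (the `:query_plan` flag name is my invention for illustration, not an existing API):

```elixir
# Hypothetical API sketch for choosing the EXPLAIN variant; the
# :query_plan option does not exist today.
Repo.explain(:all, query)                    # EXPLAIN QUERY PLAN ... (proposed default)
Repo.explain(:all, query, query_plan: false) # plain EXPLAIN ...
```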
Also need to add documentation about how to run.
Would be nice to have the results in a nicely formatted markdown file or something in the Repo for easy reference.
I have a use case for a read-then-update flow, and I was considering using an immediate transaction to prevent the race condition where a concurrent transaction changes the db after the initial read but before the updates are written. I saw exqlite has some code around those, but I'm wondering if that can be exposed to Ecto-based transactions.
Hi, I already posted this issue in the ecto repository, but @josevalim pointed me to this repository with the guess that there might be additional quotes added around the index name in the migration.
When creating a unique index with a custom expression, as shown in the Ecto.Migration docs,
create index(:branches, ["(lower(name))"], name: :branches_lower_name_index, unique: true)
and adding the `unique_constraint` check to the changeset with the name of the unique index,
...
|> unique_constraint(:name, name: :branches_lower_name_index)
it raises the following `Ecto.ConstraintError`:
** (Ecto.ConstraintError) constraint error when attempting to insert struct:
* index 'branches_lower_name_index'_index (unique_constraint)
If you would like to stop this constraint violation from raising an
exception and instead add it as an error to your changeset, please
call `unique_constraint/3` on your changeset with the constraint
`:name` as an option.
The changeset defined the following constraints:
* branches_lower_name_index (unique_constraint)
(ecto 3.8.4) lib/ecto/repo/schema.ex:783: anonymous fn/4 in Ecto.Repo.Schema.constraints_to_errors/3
(elixir 1.13.2) lib/enum.ex:1593: Enum."-map/2-lists^map/1-0-"/2
(ecto 3.8.4) lib/ecto/repo/schema.ex:768: Ecto.Repo.Schema.constraints_to_errors/3
(ecto 3.8.4) lib/ecto/repo/schema.ex:749: Ecto.Repo.Schema.apply/4
(ecto 3.8.4) lib/ecto/repo/schema.ex:367: anonymous fn/15 in Ecto.Repo.Schema.do_insert/4
Please notice the strange name of the index: `'branches_lower_name_index'_index`.
The expected behavior is that an error for the specified field is added to the changeset instead of raising an `Ecto.ConstraintError`.
Many thanks 👍
Per the 3.35.4 update, specifying a table in the RETURNING clause is not supported. It was never supported correctly and led to errors, which is why those errors were made explicit, causing some of our tests to fail.
https://sqlite.org/forum/forumpost/cdc28fec1a
We will need to update things to not emit e.g. `t.*` or even `t.id`.
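To illustrate what that means for the generated SQL (illustrative strings only, not the adapter's exact output):

```elixir
# Rejected by recent SQLite: table-qualified names in RETURNING
_bad = "INSERT INTO users (name) VALUES (?1) RETURNING u0.id"

# What we need to emit instead: bare column names
_good = "INSERT INTO users (name) VALUES (?1) RETURNING id"
```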
https://www.sqlite.org/draft/stricttables.html
Is there anything I need to do to force strict tables to always be on?
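As far as I know SQLite has no global switch; STRICT is opted into per table. In an Ecto migration that could look like this (using `:options`, which is appended to the generated CREATE TABLE statement; table and column names are made up):

```elixir
defmodule App.Repo.Migrations.CreateStrictPayloads do
  use Ecto.Migration

  def change do
    # :options is appended after the column list, yielding
    #   CREATE TABLE payloads (...) STRICT
    create table(:payloads, options: "STRICT") do
      add :name, :text
      timestamps()
    end
  end
end
```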
I've seen this error message crop up occasionally with the CI. No clue why.
/home/runner/work/_temp/893025cb-7e78-4c74-b780-ed4862c9d2b0.sh: line 1: 2370 Bus error (core dumped) EXQLITE_INTEGRATION=true mix test --trace
Hi there
I have a few questions and possibly an interesting proposal.
First of all, amazing work you've put into this adapter and the exqlite library! I'm in love with both and have migrated from sqlitex.
Is there a way to use this adapter and run normal ecto queries, but have it just generate the SQL and possibly pass it on to somewhere else, e.g. another server? Maybe a behaviour or something similar to switch between the default of running exqlite directly and passing it somewhere else. A simple abstraction would suffice, such as another server running exqlite, or a local instance which handles exqlite wrapped in (gen)servers (or something better) for concurrency. There is more to say about this, but I'm trying to be brief.
The rationale:
Last year I began working on a project similar to Actordb.com in pure Elixir.
I used sqlitex for it and mixed code from sqlitex and the mysql and postgresql ecto adapters. The idea is to have a server which you can use independently with many sqlite connections via the Actor model (since Elixir and Erlang support this greatly). Most of the apps I've built can deal with this actor-based approach quite well.
I'm thinking out loud now with an eye on using this instead of my own ecto adapter, as that would simplify things instead of wasting effort duplicating and reimplementing the wheel from zero, since Elixir is built with modularity in mind.
My current implementation prototype is located at: https://fossil.lba.im/glow/dir?ci=tip (WIP)
This is more of a question and invitation to share ideas and thoughts.
Some references and ideas:
https://expensify16.rssing.com/chan-10870710/latest-article4-live.php - Scaling Sqlite to 4 million queries per second
https://review.firstround.com/your-database-is-your-prison-heres-how-expensify-broke-free
http://www.actordb.com/
Thanks for your understanding
This would allow us to unfork our integration test code before the next releases of ecto and ecto_sql, respectively. It will also help us catch pre-release issues in those libraries.
I'm trying to create a unique index on an existing table field, but I get an error.
Migration file:
defmodule App.Repo.Migrations.PayloadUniqueConstraint do
  use Ecto.Migration

  def change do
    create unique_index(:payloads, [:name])
  end
end
Error:
$ mix ecto.migrate
18:56:37.589 [info] == Running 20210614154426 App.Repo.Migrations.PayloadUniqueConstraint.change/0 forward
18:56:37.595 [info] create index payloads_name_index
** (Exqlite.Error) UNIQUE constraint failed: payloads.name
CREATE UNIQUE INDEX payloads_name_index ON payloads (name)
(ecto_sql 3.6.1) lib/ecto/adapters/sql.ex:749: Ecto.Adapters.SQL.raise_sql_call_error/1
(elixir 1.12.0) lib/enum.ex:1553: Enum."-map/2-lists^map/1-0-"/2
(ecto_sql 3.6.1) lib/ecto/adapters/sql.ex:836: Ecto.Adapters.SQL.execute_ddl/4
(ecto_sql 3.6.1) lib/ecto/migration/runner.ex:343: Ecto.Migration.Runner.log_and_execute_ddl/3
(ecto_sql 3.6.1) lib/ecto/migration/runner.ex:117: anonymous fn/6 in Ecto.Migration.Runner.flush/0
(elixir 1.12.0) lib/enum.ex:2356: Enum."-reduce/3-lists^foldl/2-0-"/3
(ecto_sql 3.6.1) lib/ecto/migration/runner.ex:116: Ecto.Migration.Runner.flush/0
(stdlib 3.15.1) timer.erl:166: :timer.tc/1
(ecto_sql 3.6.1) lib/ecto/migration/runner.ex:25: Ecto.Migration.Runner.run/8
(ecto_sql 3.6.1) lib/ecto/migrator.ex:324: Ecto.Migrator.attempt/8
(ecto_sql 3.6.1) lib/ecto/migrator.ex:251: anonymous fn/5 in Ecto.Migrator.do_up/5
(ecto_sql 3.6.1) lib/ecto/migrator.ex:295: anonymous fn/6 in Ecto.Migrator.async_migrate_maybe_in_transaction/7
(ecto_sql 3.6.1) lib/ecto/migrator.ex:310: Ecto.Migrator.run_maybe_in_transaction/4
(elixir 1.12.0) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2
(elixir 1.12.0) lib/task/supervised.ex:35: Task.Supervised.reply/5
(stdlib 3.15.1) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
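For what it's worth, a CREATE UNIQUE INDEX failing with "UNIQUE constraint failed" usually means the table already contains duplicate values in that column. A quick check before migrating (table and column names taken from the migration above):

```elixir
# List names that occur more than once and would block the unique index
{:ok, %{rows: duplicates}} =
  Ecto.Adapters.SQL.query(
    App.Repo,
    "SELECT name, COUNT(*) FROM payloads GROUP BY name HAVING COUNT(*) > 1"
  )
```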