avescodes / conformity
A Clojure/Datomic library for idempotently transacting norms into your database – be they schema, data, or otherwise
I looked at the sample schema in resources/sample4.edn and tried to expand it by adding a definition in the :txes for :test4/norm1, so it becomes:
{:test4/norm1
{:txes [[{:db/ident :test/attribute0
:db/doc "test attribute 0"
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/id #db/id[:db.part/db]
:db.install/_attribute :db.part/db}
{:db/ident :test/attribute1
:db/doc "test attribute 1"
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/id #db/id[:db.part/db]
:db.install/_attribute :db.part/db}
{:db/ident :test/user
:db/doc "test user"
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/id #db/id[:db.part/db]
:db.install/_attribute :db.part/db}
{:db/ident :test/transaction-metadata
:db/doc "annotates the transaction with metadata"
:db/id #db/id[:db.part/user]
:db/fn #db/fn
{:lang :clojure
:params [db metadata]
:code [(assoc metadata :db/id #db/id[:db.part/tx])]}}]]}}
I've added a :test/attribute0 at index 0 of the nested vector. If I call ensure-conforms with the old schema and then subsequently with the schema above, it seems :test/attribute0 is not present in the db. Is this to be expected?
I think the cause is the cond in reduce-norms:
(defn reduce-norms
"Reduces norms from a norm-map specified by a seq of norm-names into
a transaction result accumulator"
[acc conn norm-attr norm-map norm-names]
(let [sync-schema-timeout (:conformity.setting/sync-schema-timeout norm-map)]
(reduce
(fn [acc norm-name]
(let [{:keys [txes requires]} (get norm-map norm-name)]
(cond (conforms-to? (db conn) norm-attr norm-name (count txes))
acc
As I understand it, in this case txes will be a vector containing one element, which is itself a vector (containing several maps). As I understand conforms-to?, it will also find exactly one element, so (conforms-to? ...) returns true and the new element with :test/attribute0 is then not added to the db.
I tried changing the :txes vector to be a vector containing several vectors, each of which contains a map like:
{:db/ident :test/attribute0
:db/doc "test attribute 0"
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/id #db/id[:db.part/db]
:db.install/_attribute :db.part/db}
The result is that :test/attribute0 is available in the db as I expected, but maybe my expectations are a little off on this?
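For reference, this is the reshaped norm that behaved as I expected (a sketch; only the first two attributes from the example are shown):

```clojure
;; Each inner vector of :txes is transacted (and counted) separately,
;; so a newly appended tx is seen as not yet conformed.
{:test4/norm1
 {:txes [[{:db/ident       :test/attribute0
           :db/doc         "test attribute 0"
           :db/valueType   :db.type/string
           :db/cardinality :db.cardinality/one
           :db/id          #db/id[:db.part/db]
           :db.install/_attribute :db.part/db}]
         [{:db/ident       :test/attribute1
           :db/doc         "test attribute 1"
           :db/valueType   :db.type/string
           :db/cardinality :db.cardinality/one
           :db/id          #db/id[:db.part/db]
           :db.install/_attribute :db.part/db}]]}}
```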
Trying to figure out how to do lookup refs for seed data; this is what I have for my seeds:
{:seed {:txes [[{:db/id #db/id [:db.part/user]
:email "[email protected]"
:email-verified? true
:password "kdjfkjfdkjiamahashlol"}]
]}
:seed2 {:txes [[{:db/id #db/id [:db.part/user]
:title "Thing release"
:content "Today we announce the release of thing"
:slug "thing-release"
:date #inst "2013-06-08T01:00:00Z"
:person [:email "[email protected]"]}
{:db/id #db/id [:db.part/user]
:title "We open the first thing micronation"
:content "We've opened a micronation between france and germany. Please join us. If you're too far, we'll open another soon. Closer to the North pole."
:slug "thing-micronation"
:date #inst "2016-02-08T03:50:00Z"
:person [:email "[email protected]"]}]]}
But the lookup fails. I'm on 0.4.0 and I have no idea what is wrong. Any pointers? Or is this just not possible?
This isn't a big deal, but I am assuming that :confirmity was not the intended value here:
(def default-conformity-attribute :confirmity/conformed-norms)
Assuming that schema starts off as an EDN file/resource versioned in git, how to best manage schema evolution in conjunction with conformity?
As per the README, norms are transacted once.
Is a new conformity-attr used (the 4-arity call to ensure-conforms), are new norm names generated (one per schema "version"), or something else entirely?
Any feedback from any & all much appreciated!
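For what it's worth, the pattern I've seen described elsewhere (an assumption, not something from this README) is to leave old norms untouched and append a new norm, with a new name, for each change, chained with :requires. A sketch:

```clojure
;; Sketch: each schema change is a new, immutable norm; old norms are
;; never edited, so conformity can keep skipping them idempotently.
{:my-project/schema-v1
 {:txes [[{:db/ident       :user/email
           :db/valueType   :db.type/string
           :db/cardinality :db.cardinality/one
           :db/id          #db/id[:db.part/db]
           :db.install/_attribute :db.part/db}]]}
 :my-project/schema-v2
 {:requires [:my-project/schema-v1]
  :txes [[{:db/ident       :user/name
           :db/valueType   :db.type/string
           :db/cardinality :db.cardinality/one
           :db/id          #db/id[:db.part/db]
           :db.install/_attribute :db.part/db}]]}}
```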
When pulling conformity into a project that also references Datomic pro, a dependency conflict arises due to the reader literals. If you put in an :exclusions for datomic-free, things work fine.
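For example, a sketch of the Leiningen dependency entry (version numbers here are illustrative, not a recommendation):

```clojure
;; project.clj (sketch): exclude datomic-free from conformity so it
;; doesn't conflict with the datomic-pro dependency's reader literals.
:dependencies [[com.datomic/datomic-pro "0.9.5697"]
               [io.rkn/conformity "0.5.1"
                :exclusions [com.datomic/datomic-free]]]
```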
I have a migration that looks like this:
{:myapp/migration-2017-08-08-migrate-auth0-identities
{:txes-fn 'myapp.db.migrate-auth0-identities/migrate}}
This ran fine the first time I ran it, confirmed by looking at my data and by finding :myapp/migration-2017-08-08-migrate-auth0-identities in :confirmity/conformed-norms. However, conformity keeps running this migration, and when it produces an empty list of transactions, it fails. I would expect this migration not to be run again; am I having the wrong expectations, or is this a bug? This is with 0.5.1.
Hi Ryan,
I recently realized it would be kinda nice if conformity returned some kind of output about what datoms changed (e.g. which norms were transacted, and the result of the transact call for each one). Are you open to a patch for that? I'm happy to make one; I just wanted to check if that made sense to you.
During the development of changes to a schema, it would be nice if one could try new migrations against a live database without actually transacting them. Using datomic.api/with, a db value could be obtained with the new migrations applied, without changing the actual database, so one could test whether a new migration works correctly. It would enable a faster workflow than making copies of databases and testing the migrations on those copies.
Thanks for the nice library.
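To illustrate the idea, this is roughly what I do by hand today (a sketch; conn is assumed to be a bound Datomic connection, and the tx-data is an invented example attribute):

```clojure
(require '[datomic.api :as d])

;; Speculatively apply migration tx-data to a db value; the real
;; database is untouched, but queries can run against :db-after.
(let [db      (d/db conn)
      tx-data [{:db/id #db/id[:db.part/db]
                :db/ident :person/nickname
                :db/valueType :db.type/string
                :db/cardinality :db.cardinality/one
                :db.install/_attribute :db.part/db}]
      {:keys [db-after]} (d/with db tx-data)]
  ;; run test queries against db-after here
  db-after)
```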
I'm mentioning these in case they are oversights or the result of an unintentional force push:
How do I define entity tempids in a conformity .edn file that I can reference in other entities lower down in the same :txes list?
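To make the question concrete, this is the shape I'm after (a sketch; my understanding, which may be wrong, is that #db/id tempids with the same partition and the same negative number unify within one transaction):

```clojure
;; Sketch: -1 appears twice inside a single tx, so both occurrences
;; should resolve to the same new entity.
{:my-project/seed
 {:txes [[{:db/id #db/id[:db.part/user -1]
           :person/name "Alice"}
          {:db/id #db/id[:db.part/user -2]
           :post/title  "Hello"
           :post/author #db/id[:db.part/user -1]}]]}}
```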
Duct's migrator.ragtime has a really neat feature where it hashes the migration, and if the hash of the migration changes, it rolls back the migration, then reapplies it. This is really useful in development scenarios, where you want to be able to experiment with different schemas, or incrementally modify an existing one during development.
Would something like this make sense for Conformity?
For some reason, I am not seeing a new attribute in my DB after a call to ensure-conforms:
{:app/schema
{:txes [[{:db/id #db/id[:db.part/db]
:db/ident :person/name
...}
{:db/id #db/id[:db.part/db]
:db/ident :person/petName
:db/valueType :db.type/string
...}]]}}
Then I change the schema to the following, and it seems the new attribute doesn't exist:
{:app/schema
{:txes [[{:db/id #db/id[:db.part/db]
:db/ident :person/name
...}
{:db/id #db/id[:db.part/db]
:db/ident :person/pet
:db/valueType :db.type/ref
...}]]}}
My GitHub Actions runs always fail once I've run code that uses read-resource; I believe this is because it leaves an open reader. Could it be updated to call .close when it's done with the reader?
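Something like this, perhaps (a sketch of how the fix could look, not conformity's actual source; the reliance on *data-readers* for #db/id literals is an assumption):

```clojure
(require '[clojure.java.io :as io]
         '[clojure.edn :as edn])

;; Sketch: with-open guarantees the reader is closed even if
;; reading throws.
(defn read-resource
  "Read an EDN resource off the classpath, closing the reader."
  [resource-name]
  (with-open [reader (java.io.PushbackReader.
                      (io/reader (io/resource resource-name)))]
    (edn/read {:readers *data-readers*} reader)))
```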
Hi,
We recently ran into an issue caused by two things happening at the same time: 1. forgetting to add :requires, and 2. (keys norm-map) returning keys in a non-deterministic order.
What happened was that we added a new norm to our schema file, and suddenly tests in CI started to break. The reason was a much earlier norm without :requires, plus the fact that adding a new key to the norm-map seemed to trigger Clojure to change the internal order of the map, so (keys norm-map) returned the keys in an order that provoked this issue.
(Just to note: this isn't really a bug in conformity, but our mistake of forgetting to add the :requires.)
To prevent this happening in the future, I was planning to avoid the non-deterministic behavior by using the 3-arity version of ensure-conforms and passing a sequence of norm-names, assuming that the norms are then transacted deterministically in the order of the norm-names seq. However, the docstring for norm-names doesn't mention anything about order:
norm-names (optional) A collection of names of norms to conform to.
Will use keys of norm-map if not provided.
Question: by reading the code I can see that norms are reduced in the order of norm-names, but is this behaviour I can rely on?
Change request: ☝️ if yes, would it be OK to add that to the docstring?
Hi,
I understand that norms are transacted only once. What would the recommended workflow be for a modified norm to be conformed without re-creating the db?
I'm thinking of a production environment with existing entities in which I would like to add an attribute to the schema using a norm.
Would it be as easy as duplicating the norm with the new attribute and using a different name?
Eduardo
By default, conformity attaches extra info containing :conformity/conformed-norms to each transaction. I need to attach my own extra info to those transactions; is that doable?
Given the current implementation, there is a race condition where, if two processes using Conformity start up at the same time, the same norm could be written twice.
This would be easy enough to avoid by using a transactor function. Is there some reason Conformity doesn't work this way?
The docs say do this:
;; resources/something.edn
{:my-project/something-else-schema
{:txes-fn 'my-project.migrations.txes/backport-bar-attr-to-entities-with-foo}}
This does not work for me, though; instead I have to do this:
;; resources/something.edn
{:my-project/something-else-schema
{:txes-fn my-project.migrations.txes/backport-bar-attr-to-entities-with-foo}}
Notice the quote: only the unquoted symbol works for me.
I'm sorry if this would be obvious to someone with more Datomic experience, but I have trouble understanding the structure of the norms map. The example on your GitHub page is:
{:my-project/something-schema
{:txes [[{:db/id #db/id [:db.part/db]
:db/ident :something/title
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/index false
:db.install/_attribute :db.part/db}]]}}
I understand :txes as a bunch of schemas to be put into Datomic, but I'm not really understanding the wrapping and referencing in :my-project/scheme-name. An explanation of what your program is doing, and why, would be welcome.
Why isn't :txes a single array? Other examples of putting schemas into Datomic use an array of hashmaps in their transact.
Anyway, sorry if this question is caused by my lack of understanding of Datomic (I just started with it a few days ago and I'm running into lots of problems with it).
As of com.datomic/datomic-pro 0.9.4470, schema alterations are a thing.
One feature is the ability to rename an ident, like so:
[{:db/id :person/name
:db/ident :person/full-name}]
I tried to do so with conformity:
{:app/v1
 {:txes [[{:db/id #db/id[:db.part/db]
           :db/ident :person/name}]]}
 :app/v2
 {:txes [[{:db/id :person/name
           :db/ident :person/full-name}]]}}
However, when calling ensure-conforms with a fresh db (datomic:mem, for instance), I received the following exception:
datomic.impl.Exceptions$IllegalArgumentExceptionInfo: :db.error/not-an-entity Unable to resolve entity: :person/name in datom [:person/name :db/ident :person/full-name]
I believe this is because each norm in the schema is a key-value pair in a map. Thus, when ensure-conforms is given the map, it cannot guarantee the order in which norms are transacted. Therefore, the rename fails because the :person/name ident does not yet exist.
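A workaround that seems consistent with the README (an assumption; I haven't dug into the implementation): declare the dependency explicitly so the order is forced. A sketch:

```clojure
;; Sketch: :requires forces :app/v1 to be transacted before :app/v2,
;; so :person/name exists when the rename runs.
{:app/v1
 {:txes [[{:db/id #db/id[:db.part/db]
           :db/ident :person/name
           :db/valueType :db.type/string
           :db/cardinality :db.cardinality/one
           :db.install/_attribute :db.part/db}]]}
 :app/v2
 {:requires [:app/v1]
  :txes [[{:db/id :person/name
           :db/ident :person/full-name}]]}}
```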
From the docs, regarding schema alterations:
All alterations happen synchronously, except for adding an AVET index. If you want to know when the AVET index is available, call syncSchema. In order to add :db/unique, you must first have an AVET index including that attribute.
If I want to add :db.unique/value to an existing attribute, and I do that with conformity (as I'd expect it to work) using two "norms", one that turns on the AVET index and one that turns on :db.unique/value, I'll get an exception as conformity transacts the second norm.
A way to fix this in conformity would be to call sync-schema (ideally with an optionally configured timeout on the future deref) between every norm transaction. Do you have any feedback or thoughts on that? One can also work around it by running conformity migrations twice, not specifying the unique change until the second deploy, and calling sync-schema manually after the first one, but it seems like this is a thing that could exist in conformity.
Interested in a pull request?
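To make the manual workaround concrete, a sketch of what I run by hand today (assumptions: conn is a bound connection, and :user/email is an invented attribute standing in for the real one):

```clojure
(require '[datomic.api :as d])

;; Sketch: transact the AVET index change, then block until the
;; index is available before transacting the uniqueness constraint.
(let [{:keys [db-after]} @(d/transact conn
                            [{:db/id :user/email :db/index true}])]
  @(d/sync-schema conn (d/basis-t db-after))
  @(d/transact conn
     [{:db/id :user/email :db/unique :db.unique/value}]))
```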
I think you may have forgotten to include the namespace for the two calls to datomic.api/db in this snippet of the README.
Not a huge deal though.
(require '[datomic.api :as d])
(def uri "datomic:mem://my-project")
(d/create-database uri)
(def conn (d/connect uri))
; ...
(c/has-attribute? (db conn) :something/title)
; -> false
(c/ensure-conforms conn norms-map [:my-project/something-schema])
(c/has-attribute? (db conn) :something/title)
; -> true
Hi there! I have a lot of norms in my project; some of them are pure data migrations that grab some data from the db and return update txes. This works well if there is already some data in the DB. From what I understand so far, when I'm trying to initiate a new db (e.g. an in-memory one for tests), such transactions may return an empty collection of txes (because there is no data to update) and conformity throws an error like this:
Execution error (ExceptionInfo) at io.rkn.conformity/handle-txes (conformity.clj:200).
No transactions provided for norm :aceplace/set-addon-owner
How can I prevent this behavior, or maybe I didn't understand something essential about conformity usage?
Is there support for batching transactions where you specify a norm as a function that returns a set of transaction data? In a lot of cases for us, the migration data is far too large to transact as a single transaction.
This is what I'm using today to partition it into many smaller transactions:
(require '[datomic.api :as d]
         '[clojure.tools.logging :as log])

(defn run-update
  "Run an update query, likely related to a schema migration,
  transacting the results in batches of 100."
  [connection q]
  (let [db (d/db connection)]
    (->> (q db)
         (partition-all 100)
         (reduce (fn [running-total tx-data]
                   (let [total (+ (count tx-data) running-total)]
                     (log/infof "Running update tx %d" total)
                     ;; deref so a failed transaction surfaces here
                     @(d/transact connection tx-data)
                     total))
                 0))))
In your example you do
(c/ensure-conforms conn norms-map [:my-project/something-schema])
I'm curious as to why I need to use the last argument, [:my-project/something-schema]. I think this refers to a key on the norms map:
{:my-project/something-schema
{:txes [[{:db/id #db/id [:db.part/db]
:db/ident :something/title
:db/valueType :db.type/string
:db/cardinality :db.cardinality/one
:db/index false
:db.install/_attribute :db.part/db}]]}}
However, now I have duplicate code: both my schema and my input to this ensure function have to be changed when I update a schema. Why does this argument exist? My norms map already has the name of my norm in it; why do I have to repeat myself? I think I may be missing something important in the design of this library.
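From the docstring quoted elsewhere on this page ("Will use keys of norm-map if not provided"), my reading is that the last argument is optional and only needed to conform a subset of norms. A sketch of the shorter call:

```clojure
;; Sketch: conform to every norm in the map; ordering then comes from
;; (keys norms-map), so use :requires for any ordering dependencies.
(c/ensure-conforms conn norms-map)
```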
It would be nice if the README added some documentation for the read-resource function, introduced in PR 21.
I ended up implementing something similar from scratch, though I would have used this if I had noticed it in the documentation.
The Datomic Client API does not support installing database functions. Yet clients using the Client API are just as likely to benefit from conforming the database.
I appreciate that this is a difficult problem in the general case as the database fn being created is central to the transactional nature of conforming the schema.
Would it be possible to create a "port" of the bulk of conformity to use the Client API? There would still need to be a very small function to install the database function, but that could be a one-time task. And for ad-hoc usage, there is obviously a loophole for getting the fn installed on initialization.
Perhaps this issue can serve as a starting point to discuss ways of working around this fundamental problem.