Prisma Engines

This repository contains a collection of engines that power the core stack for Prisma, most prominently Prisma Client and Prisma Migrate.

If you're looking for how to install Prisma or any of the engines, the Getting Started guide might be useful.

This document describes some of the internals of the engines, and how to build and test them.

What's in this repository

This repository contains three engines:

  • Query engine, used by Prisma Client to run database queries
  • Schema engine, used to create and run migrations and to perform introspection
  • Prisma format, used to format prisma files

Additionally, psl (Prisma Schema Language) is the library that defines what the language looks like, how it's parsed, and so on.

You'll also find:

  • libs, for various (small) libraries such as macros, user facing errors, various connector/database-specific libraries, etc.
  • a docker-compose.yml file that's helpful for running tests and bringing up containers for various databases
  • a flake.nix file for bringing up all dependencies and making it easy to build the code in this repository (the use of this file and nix is entirely optional, but can be a good and easy way to get started)
  • an .envrc file to make it easier to set everything up, including the nix shell

Documentation

The API docs (cargo doc) are published on our fabulous repo page.

Building Prisma Engines

Prerequisites:

  • Installed the latest stable version of the Rust toolchain. You can get the toolchain at rustup or via the package manager of your choice.
  • Linux only: OpenSSL must be installed.
  • Installed direnv, then run direnv allow in the repository root.
    • Make sure direnv is hooked into your shell.
    • Alternatively: load the environment defined in ./.envrc manually in your shell.
  • For M1 users: install Protocol Buffers.

Note for nix users: it should be enough to run direnv allow.

How to build:

To build all engines, simply execute cargo build on the repository root. This builds non-production debug binaries. If you want to build the optimized binaries in release mode, the command is cargo build --release.

Depending on how you invoked cargo in the previous step, you can find the compiled binaries inside the repository root in the target/debug (without --release) or target/release directories (with --release):

Prisma Component   Path to Binary
Query Engine       ./target/[debug|release]/query-engine
Schema Engine      ./target/[debug|release]/schema-engine
Prisma Format      ./target/[debug|release]/prisma-fmt

Prisma Schema Language

The Prisma Schema Language is a library which defines the data structures and parsing rules for prisma files, including the available database connectors. For more technical details, please check the library README.

The PSL is used throughout the schema engine, as well as in prisma format. The DataModeL (DML), which is an annotated version of the PSL, is also used as input for the query engine.

Query Engine

The Query Engine is how Prisma Client queries are executed. Here's a brief description of what it does:

  • takes as input an annotated version of the Prisma Schema file called the DataModeL (DML),
  • using the DML (specifically, the datasources and providers), builds up a GraphQL model for queries and responses,
  • runs as a server listening for GraphQL queries,
  • translates the queries to the respective native datasource(s) and returns GraphQL responses, and
  • handles all connections and communication with the native databases.

When used through Prisma Client, there are two ways for the Query Engine to be executed:

  • as a binary, downloaded during installation, launched at runtime; communication happens via HTTP (./query-engine/query-engine)
  • as a native, platform-specific Node.js addon; also downloaded during installation (./query-engine/query-engine-node-api)

Usage

You can also run the Query Engine as a stand-alone GraphQL server.

Warning: There is no guaranteed API stability. If you use it in production, please be aware that the API and the query language can change at any time.

Notable environment flags:

  • RUST_LOG_FORMAT=(devel|json) sets the log format. Defaults to json.
  • QE_LOG_LEVEL=(info|debug|trace) sets the log level for the Query Engine. If you need Query Graph debugging logs, set it to "trace".
  • FMT_SQL=1 enables logging formatted SQL queries.
  • PRISMA_DML_PATH=[path_to_datamodel_file] should point to the datamodel file location. This or PRISMA_DML is required for the Query Engine to run.
  • PRISMA_DML=[base64_encoded_datamodel] is an alternative way to provide a datamodel to the server.
  • RUST_BACKTRACE=(0|1) if set to 1, error backtraces will be printed to STDERR.
  • LOG_QUERIES=[anything] if set, SQL queries will be written to the INFO log. The corresponding log level needs to be enabled for them to show up in the terminal.
  • RUST_LOG=[filter] sets the filter for the logger. Can be trace, debug, info, warn, or error, which will output ALL logs at that level from every crate. The .envrc in this repo shows how to log different parts of the system in a more granular way.

Starting the Query Engine:

The engine can be started either using the cargo build tool, or by pre-building a binary and running it directly. If using cargo, replace any command that starts with ./query-engine with cargo run --bin query-engine --.

You can also pass --help to find out more options to run the engine.
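
Putting the environment flags above together, a typical local invocation might look like this. The schema path and log settings are illustrative choices, not defaults:

```shell
# Point the engine at a datamodel; this path is an example, not a default.
export PRISMA_DML_PATH="$PWD/schema.prisma"
# Human-readable logs at debug level, with formatted SQL statements.
export RUST_LOG_FORMAT=devel
export QE_LOG_LEVEL=debug
export FMT_SQL=1

# Then start the engine from the repository root:
#   cargo run --bin query-engine
```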

Metrics

Running make show-metrics will start Prometheus and Grafana with a default metrics dashboard. Prometheus will scrape the /metrics endpoint to collect the engine's metrics.

Navigate to http://localhost:3000 to view the Grafana dashboard.

Schema Engine

The Schema Engine does a couple of things:

  • creates new migrations by comparing the prisma file with the current state of the database, in order to bring the database in sync with the prisma file
  • runs these migrations and keeps track of which migrations have been executed
  • (re-)generates a prisma schema file from a live database

The engine uses:

  • the prisma files, as the source of truth
  • the database it connects to, for diffing and running migrations, as well as keeping track of migrations in the _prisma_migrations table
  • the prisma/migrations directory which acts as a database of existing migrations

Prisma format

Prisma format can format prisma schema files. It also comes as a WASM module via a node package. You can read more here.

Debugging

When trying to debug code, here are a few things that might be useful:

  • use the language server; being able to go to definition and reason about code can make things a lot easier,
  • add dbg!() statements to validate code paths, inspect variables, etc.,
  • you can control the amount of logs you see, and where they come from using the RUST_LOG environment variable; see the documentation,
  • you can use the test-cli to test migration and introspection without having to go through the prisma npm package.
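
For instance, dbg!() returns its argument (printing the value, file, and line to stderr), so it can be dropped into the middle of an expression without restructuring the surrounding code:

```rust
fn main() {
    let ids = vec![1, 2, 3];
    // dbg!(ids) prints the vector to stderr and passes it through unchanged,
    // so the rest of the expression works exactly as before.
    let doubled: Vec<i32> = dbg!(ids).iter().map(|n| n * 2).collect();
    assert_eq!(doubled, vec![2, 4, 6]);
}
```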

Testing

There are two test suites for the engines: Unit tests and integration tests.

  • Unit tests: They test internal functionality of individual crates and components.

    You can find them across the whole codebase, usually in ./tests folders at the root of modules. These tests can be executed via cargo test. Note that some of them require the TEST_DATABASE_URL environment variable to be set.

  • Integration tests: They run GraphQL queries against isolated instances of the Query Engine and assert that the responses are correct.

    You can find them at ./query-engine/connector-test-kit-rs.
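
For the unit tests that talk to a database, point TEST_DATABASE_URL at one of the containers from docker-compose.yml before running them. The URL below is a hypothetical local Postgres; adjust it to your setup:

```shell
# Hypothetical connection string; adjust user, password, port, and database.
export TEST_DATABASE_URL="postgresql://postgres:prisma@localhost:5432/postgres"

# Then run the tests:
#   cargo test
```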

Set up & run tests:

Prerequisites:

  • Installed Rust toolchain.
  • Installed Docker.
  • Installed direnv, then direnv allow on the repository root.
    • Alternatively: Load the defined environment in ./.envrc manually in your shell.

Setup:

There are helper make commands to set up a test environment for a specific database connector you want to test. The commands set up a container (if needed) and write the .test_config file, which is picked up by the integration tests:

  • make dev-mysql: MySQL 5.7
  • make dev-mysql8: MySQL 8
  • make dev-postgres: PostgreSQL 10
  • make dev-sqlite: SQLite
  • make dev-mongodb_5: MongoDB 5

On Windows: if you are not using WSL, make is not available; look at what the command does and perform the steps manually. Essentially this means editing the .test_config file and starting the needed Docker containers.

To actually get the tests working, read the contents of .envrc. Then open Edit environment variables for your account in the Windows settings, and add at least the correct values for the following variables:

  • WORKSPACE_ROOT should point to the root directory of prisma-engines project.
  • PRISMA_BINARY_PATH is usually %WORKSPACE_ROOT%\target\release\query-engine.exe.
  • SCHEMA_ENGINE_BINARY_PATH should be %WORKSPACE_ROOT%\target\release\schema-engine.exe.

Other variables may or may not be useful.

Run:

Run cargo test in the repository root.

Testing driver adapters

Please refer to the Testing driver adapters section in the connector-test-kit-rs README.

โ„น๏ธ Important note on developing features that require changes to the both the query engine, and driver adapters code

As explained in Testing driver adapters, running DRIVER_ADAPTER=$adapter make test-qe will ensure you have prisma checked out in your filesystem in the same directory as prisma-engines. This is needed because the driver adapters code is symlinked in prisma-engines.

When working on a feature or bugfix spanning adapters code and query-engine code, you will need to open sibling PRs in prisma/prisma and prisma/prisma-engines respectively. Locally, each time you run DRIVER_ADAPTER=$adapter make test-qe tests will run using the driver adapters built from the source code in the working copy of prisma/prisma. All good.

In CI, though, we need to specify which branch of prisma/prisma to use for the tests, since there is no working copy of prisma/prisma before the tests run. The CI jobs clone the prisma/prisma main branch by default, which doesn't include your local changes. To test the integration, we can tell CI to use the branch of prisma/prisma containing the changes in the adapters. To do that, use a simple convention in commit messages, like this:

git commit -m "DRIVER_ADAPTERS_BRANCH=prisma-branch-with-changes-in-adapters [...]"

GitHub actions will then pick up the branch name and use it to clone that branch's code of prisma/prisma, and build the driver adapters code from there.

When it's time to merge the sibling PRs, you'll need to merge the prisma/prisma PR first, so when merging the engines PR you have the code of the adapters ready in prisma/prisma main branch.

Testing engines in prisma/prisma

You can trigger releases from this repository to npm that can be used for testing the engines in prisma/prisma either automatically or manually:

Automated integration releases from this repository to npm

(Since July 2022.) Any branch name starting with integration/ will, first, run the full test suite in Buildkite [Test] Prisma Engines and, second, if it passes, run the publish pipeline (build and upload engines to S3 & R2).

The journey through the pipeline is the same as a commit on the main branch.

  • It triggers prisma/engines-wrapper and publishes a new @prisma/engines-version npm package, but on the integration tag.
  • That triggers prisma/prisma to create a chore(Automated Integration PR): [...] PR with a branch name also starting with integration/.
  • Since prisma/prisma also triggers its publish pipeline when a branch name starts with integration/, this publishes all prisma/prisma monorepo packages to npm on the integration tag.
  • Our ecosystem-tests will automatically pick up this new version and run tests; results show up in GitHub Actions.

End to end, this will take a minimum of ~1h20 to complete, but it is completely automated 🤖

Notes:

  • in the prisma/prisma repository, we do not run tests for integration/ branches; this is much faster and also means there is no risk of tests failing (e.g. flaky tests, snapshots) and stopping the publishing process.
  • in prisma/prisma-engines, the Buildkite test pipeline must pass first; then the engines are built and uploaded to our storage via the Buildkite release pipeline. These two pipelines can fail for different reasons, so it's recommended to keep an eye on them (check notifications in Slack) and restart jobs as needed. Finally, it will trigger prisma/engines-wrapper.

Manual integration releases from this repository to npm

In addition to the automated integration release for integration/ branches, you can also trigger a publish manually in the Buildkite [Test] Prisma Engines job, provided it succeeds, for any branch name. Click "🚀 Publish binaries" at the bottom of the test list to unlock the publishing step. When all the jobs in [Release] Prisma Engines succeed, you also have to unlock the next step by clicking "🚀 Publish client". This will then trigger the same journey as described above.

Parallel rust-analyzer builds

When rust-analyzer runs cargo check, it locks the build directory and stops any other cargo commands from running until it has completed. This makes the build process feel a lot longer. It is possible to avoid this by setting a different build path for rust-analyzer: open the VSCode settings and search for Check on Save: Extra Args, look for the Rust-analyzer › Check On Save: Extra Args setting, and add a separate target directory for rust-analyzer. Something like:

--target-dir=/tmp/rust-analyzer-check

Community PRs: create a local branch for a branch coming from a fork

To trigger an Automated integration release from this repository to npm or a Manual integration release from this repository to npm, branches from forks need to be pulled into this repository so the Buildkite job is triggered. You can use these GitHub and git CLI commands to achieve that easily:

gh pr checkout 4375
git checkout -b integration/sql-nested-transactions
git push --set-upstream origin integration/sql-nested-transactions

If this branch needs to be re-created because the PR has been updated, deleting it and re-creating it will make sure the content is identical and avoid any conflicts.

git branch --delete integration/sql-nested-transactions
gh pr checkout 4375
git checkout -b integration/sql-nested-transactions
git push --set-upstream origin integration/sql-nested-transactions --force

Security

If you have a security issue to report, please contact us at [email protected]

prisma-engines's Issues

EPIC: Embeds

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The Prisma Schema Language allows the user to define named or unnamed embeds.

Example:
The following example shows a User model where the associated account is an embed.

model User {
  id        String @default(cuid())
  account Account
}
embed Account {
  accountNumber Int
}

Links to actual Work Packages

  • not created yet because it is unclear when we will do this.

Implement conversion from introspected schema to data model

Implement conversion of schema returned by database-introspection to data model.

  • Implement field uniqueness
  • Implement field id_info
  • Implement foreign keys
  • Implement sequences
  • Implement field scalar_list_strategy
  • Implement field defaults
  • Implement enums
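
One slice of this conversion can be sketched with simplified stand-ins for the real sql_schema_describer and datamodel types (all names below are illustrative, not the actual crate APIs):

```rust
// Simplified stand-ins for the introspection output and the data model.
#[derive(Debug, PartialEq)]
enum ScalarType { Int, String, Boolean }

struct IntrospectedColumn {
    name: String,
    tpe: String,
    is_nullable: bool,
}

#[derive(Debug, PartialEq)]
struct Field {
    name: String,
    tpe: ScalarType,
    is_required: bool,
}

// Map a described column to a data model field: translate the database
// type name and invert nullability into "required".
fn column_to_field(col: &IntrospectedColumn) -> Field {
    let tpe = match col.tpe.as_str() {
        "int" | "integer" => ScalarType::Int,
        "boolean" => ScalarType::Boolean,
        _ => ScalarType::String,
    };
    Field { name: col.name.clone(), tpe, is_required: !col.is_nullable }
}

fn main() {
    let col = IntrospectedColumn { name: "age".into(), tpe: "int".into(), is_nullable: false };
    let field = column_to_field(&col);
    assert_eq!(field, Field { name: "age".into(), tpe: ScalarType::Int, is_required: true });
}
```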

Refactoring Write Execution of Query Engine

Goals:

  • Enable to work with required relations in all cases (prisma/migrate#98)
  • Simplify the AST for Writing to the database significantly.
  • This will also enable us to move a large portion of the code out of the connectors and into the core. Most notably, the code for handling required relations needs to be duplicated per connector in the current architecture.
  • Query Execution will move in a direction where executing a query becomes akin to executing a program in a LISP-like programming language.

Steps:

  • Concrete steps are unclear right now. We must do some exploration work to come up with concrete steps.
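
The last goal, treating query execution like evaluating a small program, can be sketched as a tiny interpreter over nested write expressions. This AST is a toy for illustration, not the engine's real one:

```rust
// Toy write-execution AST: a program is either a single operation or a
// sequence of sub-programs, evaluated in order.
enum Expr {
    CreateRecord { model: &'static str },
    Sequence(Vec<Expr>),
}

// Evaluating a program walks the expression tree, recording each
// operation in order (a real engine would hit the database instead).
fn eval(expr: &Expr, log: &mut Vec<String>) {
    match expr {
        Expr::CreateRecord { model } => log.push(format!("create {model}")),
        Expr::Sequence(exprs) => {
            for e in exprs {
                eval(e, log);
            }
        }
    }
}

fn main() {
    // "Create a User, then its required Profile" expressed as one program.
    let program = Expr::Sequence(vec![
        Expr::CreateRecord { model: "User" },
        Expr::CreateRecord { model: "Profile" },
    ]);
    let mut log = Vec::new();
    eval(&program, &mut log);
    assert_eq!(log, vec!["create User", "create Profile"]);
}
```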

Query Engine: Automated Benchmarks

This is a followup task from this issue.

Idea: We should instrument prisma server to measure every single request and have a prometheus/grafana setup for better timescale graphs.

Fix DeadlockSpec

The DeadlockSpec in the connector test kit suddenly started failing. The message I see in the logs in the case of Postgres is:

Error querying the database: db error: FATAL: sorry, too many clients already, application: prisma

It might be a side effect of the latest changes to connection pooling. We should fix this spec.

EPIC: Benchmarking 1/x

Description: We want to convert the benchmarking suite that we used for Prisma 1 to run against Prisma 2 to know where we are performance wise.

Spec:

  • Convert the existing benchmarking tooling to run against Prisma 2.
  • Explore whether we can store the results in a timescaledb
  • Explore whether Grafana is a good fit to visualize test results.
  • Make it convenient to profile applications locally, by e.g. integrating flamegraphs.
  • Create small reports on our findings, e.g.: Where are we worse and where are we better? Can we pinpoint the underlying issues?
  • Create an automated setup so that we can see the change of benchmarks over time. E.g. run them daily or on each commit or ...

EPIC: Multi field id criterion (Multi column PKs)

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The Prisma Schema Language allows the user to define a combination of fields as the id criterion. It is an implementation detail of a connector how this is implemented.

Example:
The following example shows a User model where the combination of firstName and lastName is the id criterion.

model User {
  firstName String
  lastName  String
  @@id([firstName, lastName])
}
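
A connector can treat such a compound id as an ordered tuple of the constituent field values; a minimal sketch of the lookup semantics, assuming an in-memory map in place of a real index:

```rust
use std::collections::HashMap;

fn main() {
    // The (firstName, lastName) pair from the example acts as the key.
    let mut users: HashMap<(String, String), u32> = HashMap::new();
    let key = ("Ada".to_string(), "Lovelace".to_string());
    users.insert(key.clone(), 1);

    // Lookups must supply every field of the compound id, in order.
    assert_eq!(users.get(&key), Some(&1));
    let partial = ("Ada".to_string(), String::new());
    assert_eq!(users.get(&partial), None);
}
```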

Affected Components

  • Datamodel Parser
  • Migration Engine
  • Query Engine
  • Introspection Engine

Links to actual Work Packages

Discussion: Backwards-compatibility of migrations table/files contents

We are approaching a point where we will need strong backwards compatibility of the persisted state of the migration engine.

To avoid breaking backwards compatibility of the migrations, we can write end to end tests that do:

  • snapshot testing of RPC inputs/outputs (json)
  • snapshot testing of migrations table (json?)

For the snapshots, we could use the insta crate.

Every time a snapshot breaks, we must manually change the test suite so it tests with the new snapshot in addition to the old (ideally, this would just be adding a file name to a static list). That way, we test that old migrations keep producing the same results.

It's unambiguous that the contents of the migration table must be backwards-compatible, but I am less sure for RPC. We should probably isolate the results that get persisted to the migrations folder, since we can evolve the API in sync with the Lift CLI.
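
The "static list of snapshots" idea could look roughly like this in plain Rust (the insta crate would automate the bookkeeping; the serialization format shown is hypothetical):

```rust
// Serialized migrations-table rows we have committed to staying
// compatible with. New formats get appended; old entries never change.
const SNAPSHOTS: &[&str] = &[
    r#"{"migration_name":"20200101_init","applied":1}"#,
    r#"{"migration_name":"20200601_add_field","applied":1}"#,
];

// Hypothetical stand-in for the engine's current serialization code.
fn current_serialization(name: &str, applied: u32) -> String {
    format!(r#"{{"migration_name":"{name}","applied":{applied}}}"#)
}

fn main() {
    // The current code must still reproduce every historical snapshot.
    assert_eq!(current_serialization("20200101_init", 1), SNAPSHOTS[0]);
    assert_eq!(current_serialization("20200601_add_field", 1), SNAPSHOTS[1]);
}
```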

Related issue: prisma/migrate#151

@mavilein @timsuchanek

EPIC: Id Strategies + Sequences

Spec: There's no mention in the Prisma Schema Language spec yet.

Description: We currently always map fields of type Int @id to an int column that is backed by a sequence. It is not possible to have an int column as primary key that is not backed by a sequence.

Example:
The following example currently always maps to an int id field that is backed by a sequence.

model User {
  id Int @id
}

Links to actual Work Packages

  • not created yet because it is unclear when we will do this.

Build OS-based binaries instead of platform-based

Currently, the rust binaries are built for specific platforms like netlify, zeit/now, etc.

Instead, we should build OS-based distributions with different openssl versions, because different operating systems ship with different default openssl versions.

We need:

  • debian-based builds with libssl 1.0.x and 1.1.x
    This will support all major Debian versions, all major Ubuntu versions, probably most or even all Debian- and Ubuntu-based distributions (e.g. Linux Mint), and cloud platforms which run on Ubuntu, like Netlify
  • centos-based builds with libssl 1.0.x and 1.1.x

This will add proper support for:

  • CentOS 7
  • Debian 8, 9, 10
  • Ubuntu 14.04, 16.04, 18.04, 19.04
  • debian and ubuntu based distributions
  • Fedora 28-30
  • Most cloud platforms like zeit/now, netlify, lambda

Note: When these binaries are shipped, we need to adapt the fetch-engine in photonjs. See #TODO for tracking.

When done, we can remove:

  • platform-specific builds like zeit, netlify, lambda, etc. when they work out of the box with the new OS-based builds
  • ubuntu16.04 because it was only built as a workaround

For a general overview, see prisma/prisma#533

Query Engine: Find a replacement for our current logger (performance)

This is a followup task from this issue.

During benchmarking we found out that our current logger implementation is very slow and in some cases can consume 50% of CPU cycles while processing a simple query. We want to replace the current logger. The idea is to evaluate tokio tracing, for which we need to write our own JSON output formatter.

EPIC: Prisma Core Primitive Types

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The PSL defines several core data types that must be implemented by each connector. We must make sure that we support exactly those, no less and no more.

Example:
The following example shows a User model using each of the core scalar types.

model User {
  id        String @default(cuid())
  string String
  bool Boolean
  int Int
  float Float
  dateTime DateTime
}
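
A connector implementing these essentially maintains a total mapping from the core PSL scalars to its native column types. The Postgres-flavoured column type names below are only an illustration of the shape of that mapping, not a spec:

```rust
// The core PSL scalar types from the example above.
#[derive(Debug)]
enum ScalarType { String, Boolean, Int, Float, DateTime }

// One connector's (here: an illustrative Postgres-flavoured) mapping.
// The match is exhaustive: exactly these types, no less and no more.
fn native_type(t: &ScalarType) -> &'static str {
    match t {
        ScalarType::String => "text",
        ScalarType::Boolean => "boolean",
        ScalarType::Int => "integer",
        ScalarType::Float => "double precision",
        ScalarType::DateTime => "timestamp",
    }
}

fn main() {
    assert_eq!(native_type(&ScalarType::Int), "integer");
    assert_eq!(native_type(&ScalarType::DateTime), "timestamp");
}
```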

Links to actual Work Packages

  • not created yet

Build a prototype to compare performance of synchronous and asynchronous database drivers

Goal: We want to compare the performance of synchronous and asynchronous database drivers.

Idea: Build a very simple HTTP server that reads from a Postgres database. Through an interface it should be easy to swap the database driver that is being used.

Requirements:

  • easily extendable: it must be easy to add new queries to the prototype
    • idea: the server exposes a generic HTTP route for GET /{path} where the path is the name of the query to be executed. The server holds a map of static queries mapped to their name.
  • Queries: We can start with some really simple queries that do not require a special database. We should start testing with two simple queries against the information_schema. One query that returns one result and one that returns many results.
  • benchmark with the Vegeta benchmarking tool
  • Build two implementations of the database interface: One based on the sync postgres driver that uses a thread pool to asyncify request processing. Another based on the async postgres driver.
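
The static query registry from the first requirement could be as simple as a name-to-SQL map that the GET /{path} handler consults; the query names and SQL below are examples against information_schema:

```rust
use std::collections::HashMap;

// The server holds a static map of query names to SQL text; a request
// for GET /{path} looks up `path` here and executes the result.
fn query_map() -> HashMap<&'static str, &'static str> {
    let mut m = HashMap::new();
    m.insert("one_row", "SELECT catalog_name FROM information_schema.information_schema_catalog_name");
    m.insert("many_rows", "SELECT table_name FROM information_schema.tables");
    m
}

fn main() {
    let queries = query_map();
    // A request for GET /one_row resolves to its registered SQL.
    assert!(queries.get("one_row").unwrap().starts_with("SELECT"));
    assert!(queries.contains_key("many_rows"));
}
```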

Interface Draft:

pub trait Testee {
    // Executes the query with the given name; the result is serialized to JSON and returned.
    // A bare `-> Future<...>` return type is not valid in a trait, so the draft boxes the future.
    fn query(&self, query_name: &str)
        -> std::pin::Pin<Box<dyn std::future::Future<Output = serde_json::Value> + Send + '_>>;
}

Query Engine: Native Types

Check the related epic for an initial breakdown of work packages.

Please supply a more detailed task list in a comment.

Migration Engine: Implement new diffing approach to enable custom types

The Migration Engine needs to be able to diff two data models in order to generate the steps.json that is central to lift. Currently we diff the data structures in the dml module of the datamodel crate. I think we should diff the data structures of the ast module instead.
This should make our code easier to maintain and at the same time enable diffing of arbitrary directives, which is required to enable custom types.

EPIC: Native Types

Product Epic: The corresponding product epic can be found here.

Description: The Prisma Schema Language allows the user to define custom types that should be used for a given field.

Example:
The following example shows a User model where the firstName field is backed by a varchar column.

datasource pg1 { 
  .. 
}

model User {
  id        String @default(cuid())
  firstName String @pg1.varchar(123)
  lastName  String
}

Links to actual Work Packages

Test introspection

Test introspection versus old system.

  1. npm install -g prisma2@alpha
  2. Create a folder and cd into it.
  3. Run prisma2 init. This starts a dialog. In the following steps choose Sqlite, Lift, Typescript and From Scratch.
  4. Then run prisma2 lift save --name test && prisma2 lift up to migrate the database
  5. Introspect(?)

Buildkite: set up MySQL 8 container for tests

There were fixes to the sql-schema-describer after users submitted issues about failing migrations (#47). We want to make sure prisma 2 works on both MySQL 8 and older versions, so that PR sets up the Rust tests to run on both versions using docker-compose locally (docker-compose.yml).

We want to have the same MySQL 8 setup on CI. It's identical to the mysql 5.7 setup, except the host port (3307), the container name and the service name (both mysql8).

This issue is for tracking the CI setup work.

Related issue: #49

EPIC: new Introspection Engine

The introspection connector for SQL is the component that introspects a SQL database and returns a Prisma schema. This gets implemented by our Freelancer Arve.

Work packages are:

  • build the command driven by unit tests, based on the idea of simply inverting the behavior of the DatabaseSchemaCalculator in the migration engine.
  • build a binary that takes a Prisma connection url and introspects the database. The result is printed to stdout. This allows trying out the introspection internally.
  • add a JSON RPC API to the introspection engine
  • create integration tests to test complex schemas in an end to end fashion. Foundation for those tests can either be our collection of database schema examples or the tests in the typescript version of this command.

Replace SQL command string formatting

Replace SQL command string formatting with safe parameter injection in introspection component.

This is a note to myself, but I can't assign issues to myself currently.

Query Engine: Asyncify

This is a followup task from this issue.

Idea: Threaded connection pool with poll_fn/blocking, async/await all the way up.

Cascading Deletes - Implementation Strategy for SQL connector

Goal: This issue describes how we plan to implement the feature of cascading deletes available in the Prisma Schema Language.

Idea: We would like to leverage SQL's ON DELETE CASCADE feature wherever possible. However, it does not support all the use cases of Prisma's cascading deletes. The idea is that the SQL feature is used as often as possible, and shims are implemented in the query engine where necessary. The parts that need to be shimmed are indicated with a 🚨 below.

Problems:

  • Contrary to intuition, a @relation(onDelete: CASCADE) on a field implies a SQL-level ON DELETE CASCADE on the column of the related field. 💥
  • Prisma provides a field for both sides of a relation that can take an onDelete annotation. On the SQL level there is only one column, and therefore only the behavior for one side can be expressed on the SQL level.
  • For many-to-many relations we cannot express the behavior we want (deletion traversing through the join table) on the SQL level.

Analysis of where SQL ON DELETE CASCADE works and where it does not

One To Many: Cascade from the Parent.

This could work purely on the SQL level and could also be introspected from the DDL.

Prisma schema:

model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parent Parent // this references the parent id
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text,
                     parent Text REFERENCES Parent(id) ON DELETE CASCADE );

Semantics:

  • delete parent: children get deleted ✅
  • delete child: parent remains ✅

One To Many: Cascade from the Child.

This cannot be expressed on the SQL level and would need to be handled by Prisma. We could also not introspect this case.

Prisma schema:

model Parent {
  children Child[] 
}

model Child {
  parent Parent @relation(onDelete: CASCADE) // this references the parent id
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text,
                     parent Text REFERENCES Parent(id) );

Semantics:

  • delete parent: children remain ✅
  • delete child: parent gets deleted 🚨

One To Many: Cascade from both Sides

This cannot be fully expressed on the SQL level. We would need a mix of Prisma level and SQL level handling. We could therefore also not introspect this case.

Prisma schema:

model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parent Parent @relation(onDelete: CASCADE) // this references the parent id
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text,
                     parent Text REFERENCES Parent(id) ON DELETE CASCADE );

Semantics:

  • delete parent: children get deleted ✅
  • delete child: parent gets deleted 🚨

Many to Many

Here the SQL level ON DELETE CASCADE statements merely ensure that there are no dangling relation entries after one of the connected nodes is deleted. They have no connection to the Prisma level semantics. The Prisma level ones cannot be expressed in the db and therefore can also not be introspected. Here, for all cases (CASCADE on child, CASCADE on parent, CASCADE on both), the implementation needs to happen on the Prisma level.

Prisma schema:

model Parent {
  children Child[] @relation(onDelete: CASCADE)
}

model Child {
  parents Parent[]
}

corresponding SQL:

Create Table Parent ( id Text );
Create Table Child ( id Text );
Create Table _ParentToChild (
  parent Text REFERENCES Parent(id) ON DELETE CASCADE,
  child Text REFERENCES Child(id) ON DELETE CASCADE
);

Semantics:

  • delete parent: children get deleted 🚨
  • delete child: parents remain ✅

EPIC: Multi field unique

Spec: The relevant part of the spec on the Prisma Schema Language can be found here.

Description: The Prisma Schema Language allows the user to define a combination of fields that should be unique across all records for that model with the @@unique directive. It is an implementation detail of a connector how this is implemented. Usually a connector will leverage an index with unique constraint to implement this.

Example:
The following example shows a User model where the combination of firstName and lastName is unique.

model User {
  id        String @default(cuid())
  firstName String
  lastName  String
  @@unique([firstName, lastName], name: "UniqueFirstAndLastNameIndex")
}

Links to actual Work Packages
