
elementary-data / elementary


The dbt-native data observability solution for data & analytics engineers. Monitor your data pipelines in minutes. Available as self-hosted or cloud service with premium features.

Home Page: https://www.elementary-data.com/

License: Apache License 2.0

Python 22.28% HTML 77.70% Dockerfile 0.02%
data-lineage data-governance data-warehouse snowflake bigquery data-analysis data-pipelines data-pipeline lineage data-reliability

elementary's People

Contributors

aaron-westlake, arun-kc, aylr, dapollak, ellakz, elongl, erikzaadi, hadarsagiv, hahnbeelee, handotdev, haritamar, idoneshaveit, ivan-toriya, kovaacs, maayan-s, manulpatel, nic3guy, nimrodne, noakurman, noyaarie, ofek1weiss, oravi, oren-elementary, roitabach, seanglynn-thrive, shahafa, svdimchenko, vishaalkk, web-flow, yu-iskw


elementary's Issues

[Feature] Enable to specify destination database and schema for elementary

Motivation

The destination database and schema for elementary are determined by the first node whose package_name is elementary. If we have multiple databases and schemas with elementary dbt tests in a dbt project, the destination can vary. So, it would be great to be able to specify the destination database and schema.

Possible behavior

Default

If the destination database and schema are not specified, the destination database can default to the database of the profile and the destination schema can default to elementary.

Specified

If the destination database for elementary is specified with a dbt variable such as elementary_artifacts_database, it is used. Likewise, if the destination schema is specified with a dbt variable such as elementary_artifacts_schema, it is used.
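
A minimal sketch of how this could look in dbt_project.yml, assuming the variable names proposed above (they are a proposal, not existing configuration), with an illustrative database name:

    vars:
      elementary_artifacts_database: analytics      # illustrative destination database
      elementary_artifacts_schema: elementary       # destination schema for elementary's models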

Improve user flow in case graph is empty

If there are no exceptions but also no valid queries for generating the graph, the HTML report is created empty.
This is probably caused by permission issues or misconfiguration.
Instead of generating the empty graph, we should present the user with instructions on what went wrong and what to do to get lineage.

Support custom anomaly threshold

Task Overview

  • Currently all anomalies are calculated statistically with a fixed global threshold (it's configurable with a var called anomaly_score_threshold, its default is 3). In some use cases it's better to define a more or less sensitive threshold based on the underlying dataset. Currently all the monitors are implemented as dbt tests, so the ideal solution would be to provide an additional test parameter that could receive a custom anomaly threshold for a specific test. If this parameter is not provided to the test, the default should remain as it's configured with the global var 'anomaly_score_threshold'.

Design

  • There are three main files where the test macros are implemented - test_table_anomalies.sql, test_column_anomalies.sql and test_all_columns_anomalies.sql (please note that currently there is some code duplication in these files and in the future we will probably fix it).
  • All of these test macros should receive a new parameter called 'anomaly_threshold', defined at the end of the signature with a default value of 'none'.
  • Then each test should pass this value to the macro 'get_anomaly_query'.
  • If the received value of this param is none, the code in the 'get_anomaly_query' macro should use the macro elementary.get_config_var to get the global var 'anomaly_score_threshold' and use its value instead (this is the behavior today).
  • Then the anomaly query should use this param (or the global value of the var anomaly_score_threshold) to determine whether there is an anomaly (look for 'where abs(anomaly_score)'). A sketch follows below.
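
A rough Jinja sketch of the proposed fallback, assuming get_anomaly_query gains an anomaly_threshold argument (other arguments and the actual query are omitted; 'anomaly_scores' is an illustrative relation name):

    {% macro get_anomaly_query(anomaly_threshold=none) %}
        {% if anomaly_threshold is none %}
            {# behavior today: fall back to the global var #}
            {% set anomaly_threshold = elementary.get_config_var('anomaly_score_threshold') %}
        {% endif %}
        select * from anomaly_scores
        where abs(anomaly_score) > {{ anomaly_threshold }}
    {% endmacro %}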

[ELE-33] [Feature] Support Clickhouse

(Feel free to close if this isn't helpful :) )

We (Superwall S21) have been looking for a tool like this to help us monitor our data pipelines. We help customers understand the performance of changes to monetization campaigns in apps, so it is super important we know when something is broken. Right now we have dashboards in Grafana that help us see overall counts, but we have literally been caught by one of the examples you called out in your docs, an increased null rate. This would have saved us so much time.

Our stack looks like SDK -> NodeJS API -> Kafka -> Clickhouse right now and we're looking for better monitoring tooling to let us know when something is broken.

ELE-33

[Feature] Enrich graph with dbt run and test results

Feature description

  • Today many data teams run their data transformations with dbt. When you run dbt either in production or locally, dbt produces artifacts that contain useful information about your views and tables like relevant test results, freshness issues, update times and metadata about your data (to learn more about dbt's artifacts go here).

  • We could add an option to provide the tool with your dbt's project artifacts directory (usually it's the target directory in your dbt project directory but is also configurable) and it will automatically parse these artifacts and enrich the data lineage graph with your latest dbt run and test results.

Feature output

Feedback

We would love to hear any comments or feedback about this feature.

Column size issue in 'all_columns_anomalies' test

Issue description:
In the 'all_columns_anomalies' test, the results table is created using the first batch of results (the first column), and the column sizes of the results table are inferred from the content of this batch.
If a following batch contains larger values, inserting that batch fails.

Solution:
Create the table with an empty template (like we do with the incremental tables).
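
A sketch of the empty-template idea (table and column names here are illustrative, not the actual ones): create the results table from a zero-row select with explicit casts, so column sizes no longer depend on the values in the first batch.

    create table if not exists all_columns_anomalies_results as
    select
        cast(null as varchar(4096)) as column_name,
        cast(null as varchar(4096)) as metric_name,
        cast(null as float)         as metric_value
    where 1 = 0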

The fix will be deployed as part of the Redshift integration branch:
elementary-data/dbt-data-reliability#20

[Feature] DWH Insights - Reports for operational use cases

As Elementary has access to the logs of the DWH, we can analyze and create reports that go beyond lineage and health tracking.

  • Use cases we plan to support:

    • Assets importance - We can score datasets based on the number of dependencies, read queries, users, etc. This can help identify important datasets.

    • Usage visibility:

      • Updates activity - Provide info about the frequency of updates of datasets, including large gaps (probable SLA breaches), trends, etc.
      • Usage activity - Provide info about the usage (read queries) frequency of datasets, including users, trends, etc.
    • Cost and performance optimization:

      • Cleanup recommendations - Report on datasets that are not used, to reduce storage costs and operational overhead.
      • Jobs performance - Provide info about the performance of repeating queries. This is useful for identifying deteriorating queries and changes in resource consumption, so teams can prioritize development efforts and optimize operations.
  • As a first step, the insights can be provided as CSV / JSON files or written into tables in the DWH.

  • In the future, we can add reports to the UI and create automated workflows, based on feedback and usage.

Feedback

We would love to hear any feedback / comments / requests about this feature!
Specifically what use cases are valuable to you, and if there are others you would want us to address.

[ELE-37] Detection of deleted columns in Bigquery

Elementary pulls the data to detect schema changes from the information schema columns view.
This view is updated with a delay (unclear how long), so the alert on deleted columns is delayed as well.

Possible solution
Research if there is an alternative source that updates in real time.

ELE-37

Make the artifacts uploader more granular

Today the artifacts uploader includes several types of artifacts.
As it runs at the 'on run end' of each run, this might be an overhead for the users.
If we make it more granular, we can let them choose which artifacts they are interested in.

Multiple profiles for alerting

Our current configuration for alerting using the CLI is to configure a profile named 'elementary' and provide it the schema where 'data_monitoring_metrics' is located.
This means that there can only be one profile for alerting.

The need here is to enable monitoring of two different schemas that are managed separately on the same db.

Hemant - could you confirm that this describes the need well?

SingleStore integration

I’d like to request integration with SingleStore. It’s MySQL and MariaDB wire protocol-compatible so there’s wide availability of client drivers across languages.

Pass timestamp_column as a test param

Task Overview

  • Currently timestamp_column is the only configuration that needs to be set globally in the model config section (usually it's configured in the properties.yml under elementary in the config tag).
  • Passing the timestamp_column as a test param would enable running multiple tests with different timestamp columns, for example a test on an updated_at column, which represents the update time of the row, or a test on event_time, which represents the time the event was sent.

Design

  • There are three main files where the test macros are implemented - test_table_anomalies.sql, test_column_anomalies.sql and test_all_columns_anomalies.sql (please note that currently there is some code duplication in these files and in the future we will probably fix it).

  • All of these test macros should receive a new parameter called 'timestamp_column', defined at the end of the signature with a default value of 'none'.

  • In each test there are currently two lines of code responsible for extracting the timestamp_column from the global model config:
    {%- set table_config = elementary.get_table_config_from_graph(model) %}
    {%- set timestamp_column = elementary.insensitive_get_dict_value(table_config, 'timestamp_column') %}

  • The macro 'get_table_config_from_graph' returns the timestamp_column and its normalized data type (called 'timestamp_column_data_type')

  • The following code in the macro 'get_table_config_from_graph', which is responsible for finding the timestamp column data type, should be extracted to a macro called find_normalized_data_type_for_column:
    {% set columns_from_relation = adapter.get_columns_in_relation(model_relation) %}
    {% if columns_from_relation and columns_from_relation is iterable %}
        {% for column_obj in columns_from_relation %}
            {% if column_obj.column | lower == timestamp_column | lower %}
                {% set timestamp_column_data_type = elementary.normalize_data_type(column_obj.dtype) %}

  • Then in the test itself, if the received timestamp_column param is not none, use this extracted macro to find the column's normalized data type and pass this timestamp_column and timestamp_column_data_type to the relevant functions (get_is_column_timestamp, column_monitoring_query, table_monitoring_query).

  • If the timestamp_column is none, use the global timestamp column as it is implemented today (an example of the proposed per-test configuration follows below).
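
Under this proposal, a per-test override could look roughly like this in the properties.yml (a sketch only - the per-test timestamp_column parameter does not exist yet; the model name is illustrative):

  - name: orders
    config:
      elementary:
        timestamp_column: "updated_at"
    tests:
      - elementary.table_anomalies:
          table_anomalies:
            - row_count
          timestamp_column: event_time    # proposed per-test override of the global config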

Export lineage relationships in a file

Some data discovery platforms like Amundsen can visualize data lineage relationships at a glance while you are exploring data, but you need to bring your own lineage metadata. I'm really interested in an option to export the lineage relationships to a file (CSV or JSON) so I can pass it to the Amundsen extractor. The basic lineage extractor of Amundsen uses a single CSV file with the structure: source_table, target_table.
If the file is JSON, additional info can be put in there, so we could filter per operation type: CREATE_VIEW or CREATE_TABLE.
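
For illustration, an exported JSON file could carry the same pair plus the operation type (a sketch of a possible format, with made-up table names, not something that exists today):

    [
      {"source_table": "analytics.raw_orders", "target_table": "analytics.stg_orders", "operation": "CREATE_VIEW"},
      {"source_table": "analytics.stg_orders", "target_table": "analytics.orders", "operation": "CREATE_TABLE"}
    ]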

[ELE-36] Athena integration

This is a new type of integration that was requested in the Slack community.

From a quick look it seems like dbt already supports Athena and most of the features are supported.
The monitoring is implemented as dbt tests, so we will need to run the package and its tests on an env with Athena to see if the tests work as expected on this platform.

ELE-36

[Feature] Store elementary results in a single schema

At the time of writing this with elementary 0.4.1, elementary persists results per dbt schema. Personally, I find this can get messy, as I have a lot of dbt schemas, that is, BigQuery datasets. Consider a dbt project with 60 BigQuery datasets across 5 Google Cloud projects: the same number of new BigQuery datasets for elementary are created. It looks messy to me. I would like to bundle them in a single BigQuery dataset to keep things clean. Moreover, it would be nice to be able to specify a single destination schema in a database, that is, a single destination BigQuery dataset in a GCP project.

[BigQuery] Syntax error: Illegal escape sequence

Hi,
I'm testing your dbt package for our data warehouse, which is hosted on Google BigQuery.
The generated SQL scripts are producing errors. One big error is coming from the on-run-end hook:

Database Error
  Syntax error: Illegal escape sequence: \E at [6:566]

I've checked the generated SQL code and it seems that all backslashes coming from the model paths are producing an error, e.g. this part:
...'models\business_vault\EXCHANGERATESAPI\msrglB\currency_datedexchangerates_xrio_brs.sql','business_vault\EXCHANGERATESAPI\msrglB\currency_datedexchangerates_xrio_brs.sql','2022-04-05 11:04:47')

I think some escaping is missing.

[ELE-35] Tests configuration changes

The key for choosing specific tests today has the same name as the test.
We got feedback that this is confusing and inconsistent between tests.

# Current format:

        - elementary.all_columns_anomalies:
            all_columns_anomalies:
              - null_count

# Suggested format:

        - elementary.all_columns_anomalies:
            monitors:
              - null_count

Also, we should accept the value 'all' as another option to activate all the monitors, as this is more intuitive than activating all of them by default.
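
With that suggestion, activating everything explicitly might look like this (sketch):

        - elementary.all_columns_anomalies:
            monitors: all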

ELE-35

Full-refresh for test metrics

In data anomaly tests we collect metrics for 14 days back by default (configurable as 'days_back').
For performance reasons, if elementary already has data for some of the days, it won't recalculate the metrics.
However, in some cases users may want to recalculate.
Also, if there was a full-refresh of an incremental table, we should probably recalculate by default.

BigQuery cross region lineage

Overview -

  • Today the location defined in profiles.yml is being used when querying the information schema (BigQuery's information schema view name contains the region)
  • If the same project contains different datasets in different locations, the lineage will be calculated based on queries in the configured region only
  • The requested enhancement here is to get a list of configured regions and to use queries from all regions when building the lineage graph

[Feature] DWH costs visibility reports

As Elementary has access to the logs of the DWH, we can analyze and create reports that provide visibility into the costs of the DWH.

Use cases we can support:

  • Costs per user
  • Costs per dataset (table / schema / warehouse), including leveraging our knowledge about dependencies to provide the real cost
  • Trends - Datasets and queries that are becoming more expensive over time

As a first step, the insights can be provided as CSV / JSON files or written into tables in the DWH.
In the future, we can add reports to the UI and create automated workflows, based on feedback and usage.

Feedback

We would love to hear any feedback / comments / requests about this feature!
Specifically what use cases are valuable to you, and if there are others you would want us to address.

[Feature] End to end lineage graph - include BI and ETL tools

In order to make the graph more informative and useful, we want to present the BI and ETL tools on the lineage graph.
The relevant tools are upstream (data sources) and downstream (consumers/destinations) from the DWH.

As we build lineage from the DWH available logs, and we don't want additional integrations at the moment, the idea is:
The user will create a configuration file (YAML) that will associate service users with tools:

For sources (Fivetran, Airbyte, etc.) - the service users the tools use to load data into the DWH.
For targets (Looker, Tableau, etc.) - the service users the tools use to pull data from the DWH.

Example YAML format:

sources:
    airbyte:
        user_name: airbyte
        role: airbyte_service

destinations:
    tableau:
        user_name: tableau
        role: tableau_service

Feature output demo:


New monitor: column values distribution

The feature is a new monitor for column anomalies detection.

Monitor goal:
Detecting a change in the distribution of the different values in a column.

Example:
An example would be an orders table which represents orders placed across multiple stores. There's a column in the table which represents the store the order was placed in. We'd then want to have a test fail if the count of orders by day for a store dropped significantly below the typical value for that store.
Something along the lines of:

select column_name, count(*)
from table
group by column_name

Possible implementations:
Package:

  1. Add new CTE in the current column_monitoring_query
  2. Create a new query for this test, and add it to the flow of test_column_anomalies
    Anomaly detection does not need to change to support this.

CLI:
We need to think about how to present it in the UI.
Currently, we have a metric graph for each monitor+column. This test will output a metric per value+monitor+column.

Move schema and/or database definition from profiles.yml to CLI

Today the database and schema for the lineage graph are defined in a yml file.
These are used for both the connection and the filtering of the graph/queries.
The change is to get these as input in the CLI.

Pros:
Would enable easier creation of workflows for different datasets using the same configuration file.

Cons:
A change from how these are used in dbt, which is familiar to the users.

Artifacts uploader fails on Bigquery if it reaches the query size limit

The query on the artifacts uploader on-run-end hook failed with the following error:

Database error while running on-run-end
Encountered an error:
Database Error
  The query is too large. The maximum standard SQL query length is 1024.00K characters, including comments and white space characters.
 

This probably happens on very large dbt projects.

Snowflake special characters in table names

Snowflake syntax includes use of special characters in table names in some cases, and currently we don't support it.

Example:
select metadata$filename, metadata$file_row_number, t.$1, t.$2 from @mystage1

Add filtering options for the data monitoring tests

Currently, we use the timestamp column as a filter, or no filtering at all (run on the entire table).
For some use cases, this is not enough.

Use cases:

  • Snapshot tables - timestamp is not relevant, you want to filter on rows where 'valid_to' is null
  • Big tables with no timestamp column - order by + limit?

dbt supports where conditions using this macro:
https://github.com/dbt-labs/dbt-core/blob/main/core/dbt/include/global_project/macros/materializations/tests/where_subquery.sql
(Documented here: https://docs.getdbt.com/reference/resource-configs/where)

We need to understand whether there are use cases where you need both the timestamp and additional filtering.
Should there be a different behaviour for such tests?
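
For the snapshot-table use case above, dbt's where config could in principle be applied to an elementary test; a sketch, assuming elementary tests honor the standard test config (the model name is illustrative):

  - name: customers_snapshot
    tests:
      - elementary.table_anomalies:
          table_anomalies:
            - row_count
          config:
            where: "valid_to is null"    # filter to current rows instead of a timestamp window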

Databricks integration

This is a new type of integration that was requested in the Slack community.

  • From a quick look it seems like dbt already supports Databricks and most of the features are supported.
  • The monitoring is implemented as dbt tests, so we will need to run the package and its tests on a Databricks env to see if the tests work as expected on this platform.

Custom 'query_history_source'

Feature request:
A custom 'query_history_source', so users could create a VIEW from ACCOUNT_USAGE that has the same schema.
This would enable users to create full lineage without requiring ACCOUNT_ADMIN role privileges (a better practice from a security perspective).
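
As an illustration (Snowflake SQL, with made-up database/schema/role names), the view the tool could be pointed at might be defined roughly like this:

    -- Sketch only: a view over Snowflake's query history that a less-privileged
    -- role can be granted access to, instead of requiring ACCOUNT_ADMIN
    create view analytics.elementary.query_history_source as
    select * from snowflake.account_usage.query_history;

    grant select on view analytics.elementary.query_history_source to role elementary_role;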

Requested on Slack, 2021-09-27.

Check if filter table is found before execution starts

When running with a filter, the filtering runs after the graph is calculated.
A check could be added at the beginning of the execution to minimize 'The node is not in the digraph' exceptions.

Check against the information schema:

Pros:

  • Will catch typos and misconfiguration.
  • Fast and easy.

Cons:

  • Will not catch the case where the filtered table exists in the database and schema but not in the queries (time range, for example).

Search in the queries:

Pros:

  • Will catch typos, misconfiguration, and tables that are not in the selected queries.

Cons:

  • Might be slow.

Pass days_back and/or backfill_days_per_run as a test param

Task Overview

  • Currently the backfill period for the test runs is a global config (it's configurable with a var called days_back, its default is 2). However, this should be based on the underlying dataset and the routine of the updates/backfills. Currently, all the monitors are implemented as dbt tests, so the ideal solution would be to provide an additional test parameter that could receive a custom 'days_back' for a specific test. If this parameter is not provided to the test, the default should remain as it's configured with the global var.

Design

  • There are three main files where the test macros are implemented - test_table_anomalies.sql, test_column_anomalies.sql and test_all_columns_anomalies.sql (please note that currently there is some code duplication in these files and in the future we will probably fix it).
  • All of these test macros should receive a new parameter called 'days_back', defined at the end of the signature with a default value of 'none'.
  • Then each test should pass this value to the macro 'get_days_back'.
  • If the received value of this param is none, the code in the 'get_anomaly_query' macro should use the macro elementary.get_config_var to get the global var 'days_back' and use its value instead (this is the behavior today).
  • Validate that there aren't other places in the code relying on this var.

Should be similar to this change:
elementary-data/dbt-data-reliability#26
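
A per-test override under this proposal might be configured like this (a sketch - the days_back test parameter does not exist yet):

        - elementary.table_anomalies:
            table_anomalies:
              - row_count
            days_back: 30        # proposed per-test override of the global var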

Integrate lineage visualization with the report UI

Motivation

The report UI is awesome. When we find failures on a table, we would like to investigate affected downstream tables. So, it would be great to support the lineage feature in the report UI.

Running process got stuck

Hi team,
I tried to run elementary, but it just got stuck after logging into Snowflake.
Nothing has changed on my screen for at least an hour (screenshot attached).

Do you have any ideas about what could go wrong?

Add "test alerting" on Slack

Slack alerts are only sent if there are failed tests.
When users deploy Elementary, they want to have a way to validate that the deployment worked.
We should have a flag for validating the deployment.

Add support for Slack workflows format in alerts

The current format does not support workflows, as it only supports key-value pairs.

{
  "description": <ALERT_DESCRIPTION>,
  "table": <table_name>,
  "detected_at": <detected_at>
}

Possible solution:
A config property where you can set the type of Slack integration you are using (either 'workflow' or 'webhook'). If you set the config to 'workflow', the response body would be formatted in the proper way.
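
For example, the config could gain a key along these lines (both key names here are illustrative, not the actual configuration):

    slack:
      webhook: <your_slack_webhook_url>
      integration_type: workflow    # hypothetical key: 'workflow' or 'webhook'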

Integrate with Snowflake's new ACCESS_HISTORY

Snowflake is about to release a new feature that will enable parsing the query and extracting the set of tables and columns (we probably still need to parse the query to learn the relation between the columns for column level lineage).

Main benefits -

  • Make the tool faster (no need to extract tables lineage using our python parser)
  • We might be able to simplify the setup (we will consider removing some python dependencies as they won't be needed anymore)

Downsides -

  • Requires permissions to account_usage
  • For column level lineage parsing the relation between columns is still needed

Open questions / how does this work with the following -

  • Views
  • Copy into commands
  • Subqueries

Open questions for columns -

  • Operators / Select * / Cases
  • Subqueries
  • Relations

See more details about this upcoming release here -

[ELE-32] [Feature] Column Level Lineage

Feature description

  • Currently the lineage is at the table level, which means that if you need to change or monitor a specific column or a set of columns, you have to manually search the relevant tables to see how each column is transformed and what its current status is in each step or table in your data flow.

  • In order to easily understand the upstream & downstream dependencies of specific columns and their live status (volume / freshness for example), we should also parse and extract dependencies between columns as part of the query processing phase.

  • Showing the entire column level lineage for an average environment could be overwhelming, so we will probably start by presenting the lineage graph of chosen columns.

  • A new option could be added to the CLI to support selecting a specific column (similar to the table filter option); then, using the direction / depth command line options, the relevant upstream / downstream (or both) dependencies will be presented in the lineage graph.

Feature output


Feedback

We would love to hear any feedback / comments / requests about this feature.

ELE-32

[Question] How can we prepare `table_monitors_config`?

Overview

I tried to monitor dbt tests with jaffle_shop by following the documentation, but I was not able to upload the artifacts due to the lack of the destination table. If I am correct, we have to create the table table_monitors_config ahead of time, but the documentation doesn't describe table_monitors_config. How can we prepare the table?

Environments

  • Python 3.8
  • dbt 1.0.3
  • elementary 0.3.2.

Error message

09:22:03  Running 2 on-run-end hooks
09:22:33  1 of 2 START hook: jaffle_shop.on-run-end.0..................................... [RUN]
09:22:33  1 of 2 OK hook: jaffle_shop.on-run-end.0........................................ [OK in 0.00s]
09:22:33  2 of 2 START hook: elementary.on-run-end.0...................................... [RUN]
09:22:33  2 of 2 OK hook: elementary.on-run-end.0......................................... [OK in 0.00s]
09:22:33
09:22:33
09:22:33  Finished running 8 view models, 9 incremental models, 2 table models, 3 seeds, 17 tests, 3 hooks in 44.13s.
09:22:33
09:22:33  Completed with 2 errors and 0 warnings:
09:22:33
09:22:33  Runtime Error in model filtered_information_schema_columns (models/edr/metadata_store/filtered_information_schema_columns.sql)
09:22:33    404 Not found: Table sandbox-project:jaffle_shop_elementary.table_monitors_config was not found in location asia-northeast1
09:22:33
09:22:33    (job ID: d0f16aa6-b33d-493f-b0d1-8b934a090682)
09:22:33
09:22:33  Runtime Error in model filtered_information_schema_tables (models/edr/metadata_store/filtered_information_schema_tables.sql)
09:22:33    404 Not found: Table sandbox-project:jaffle_shop_elementary.table_monitors_config was not found in location asia-northeast1
09:22:33
09:22:33    (job ID: 03fc0b75-262f-49f8-8957-96b55268e9ac)

[ELE-34] Postgres Integration

Hi guys, congrats on YC!

I'm looking to test out elementary, so I was running it with the good ol' jaffle_shop dbt toy example.
However, I'm running into errors after installing the elementary deps and running "dbt run".
(I'm running with a local Postgres db and admin permissions.)

I guess it doesn't support postgres just yet?

(screenshots attached; see dbt.log)

ELE-34

[ELE-42] Add support to _PARTITIONTIME as timestamp column in Bigquery

In Bigquery partitioned tables have a pseudo-column named _PARTITIONTIME.
https://cloud.google.com/bigquery/docs/querying-partitioned-tables

Before running the test query, Elementary checks the information schema to see whether the column provided as the timestamp column exists. As I assume _PARTITIONTIME is not there, it is ignored (we don't want to query a missing column and fail).

Requested on Slack by Krzystof D.

ELE-42

SQL compilation error when using column-level anomalies

Hey all,

I added the elementary package to the dbt repository and used dbt run to create all the required tables. But when I tried to add column-level anomalies, dbt run gave me the following error:

19:10:32    001003 (42000): SQL compilation error:
19:10:32    syntax error line 7 at position 19 unexpected '""'.
19:10:32    syntax error line 10 at position 15 unexpected ''day''.
19:10:32    syntax error line 10 at position 26 unexpected '('.
19:10:32    syntax error line 10 at position 40 unexpected 'as'.
19:10:32    syntax error line 12 at position 1 unexpected ')'.

The configuration I added to the yml file is as:

  - name: table_name
    config:
      elementary:
        timestamp_column: "_inserted_at"
    tests:
      - elementary.table_anomalies:
          table_anomalies:
            - row_count
            - freshness
    columns:
      - name: "id"
        description: " "
        quote: true
        tests:
          - not_null
          - unique
          - elementary.column_anomalies:
              column_anomalies:
                - missing_count
                - min_length

However, table-level anomalies worked as expected. I tried to look at the compiled SQL files under target/compiled and target/run, but couldn't find any models relevant to this problem. Any ideas?
