

dbt-oracle


dbt-oracle implements dbt (data build tool) functionalities for Oracle Autonomous Database.

Prior to version 1.0.0, dbt-oracle was created and maintained by Indicium on their GitHub repo. Contributors in that repo are credited with laying the groundwork and maintaining the adapter until version 0.4.3. From version 1.0.0 onward, dbt-oracle is maintained and distributed by Oracle.

Installation

pip install dbt-oracle

Documentation

Please refer to the Oracle setup page on the dbt docs website for documentation.

Help

Questions can be asked either in the #db-oracle community Slack channel or in GitHub Discussions.

Bug reports and feature requests can be raised in GitHub Issues.

Sample project

To get started, a sample dbt project is available in the /dbt_adbs_test_project directory.

Contributing

This project welcomes contributions from the community. Before submitting a pull request, please review our contribution guide.

Security

Please consult the security guide for our responsible security vulnerability disclosure process.

License

dbt-oracle is licensed under the Apache 2.0 License, which you can find here.

dbt-oracle's People

Contributors

algol68, alina-yur, aosingh, dataders, rkope99, spavlusieva, thosap, younesamin


dbt-oracle's Issues

[Bug] Null values in column_id causes error in documentation generation

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When running dbt docs generate, an error occurs if there are NULL values in sys.all_tab_columns.column_id.

Error:

[...]
  File "...\lib\site-packages\dbt\task\generate.py", line 100, in add_column
    column_data["index"] = int(column_data["index"])
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'

The column has no NOT NULL constraint and can therefore be NULL.

Expected Behavior

The documentation should be created even if null values are present.

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS: Ubuntu 22.04
- Python: 3.10
- dbt: 1.5

What Oracle database version are you using dbt with?

19c

Additional Context

This issue could be solved easily by changing the SQL in /include/oracle/macros/catalog.sql:

with columns as (
    [...]
    -- column_id ordinal_position,              -- old
    coalesce(column_id, 0) ordinal_position,    -- new
[...]
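The traceback boils down to Python's int() rejecting None. A minimal sketch of a defensive conversion (the coalesce in the SQL fix above achieves the same thing on the database side; to_index is an illustrative helper, not dbt code):

```python
def to_index(value, default=0):
    # Mirrors the failing conversion in dbt's generate.py:
    # int(None) raises TypeError, so substitute a default first.
    return int(value) if value is not None else default

print(to_index("3"))   # 3
print(to_index(None))  # 0
```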

upgrade to support dbt-core v1.5.0

Background

The latest version of dbt Core, dbt-core==1.5.0rc1, was published on April 13, 2023 (PyPI | GitHub).

How to upgrade

dbt-labs/dbt-core#7213 is an open discussion with more detailed information. If you have questions, please put them there!

The above linked guide has more information, but below is a high-level checklist of work that would enable a successful 1.5.0 release of your adapter.

  • Add support for Python 3.11 (if you haven't already)
  • Add support for relevant tests (there are a lot of new ones!)
  • Add support for model contracts
  • Add support for materialized views (this will likely be bumped to 1.6.0)

the next minor release: 1.6.0

FYI, dbt-core==1.6.0 is expected to be released at the end of July, with a release cut at least two weeks prior.

[Bug] Connection to PDB

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I can connect to a PDB, but the generated models fail due to naming issues in the schema qualifier (the part before the dot) of the CREATE TABLE statements. There are no issues with seeds and sources.

Expected Behavior

Steps To Reproduce

Relevant log output using --debug flag enabled

-

Environment

- OS: Ubuntu
- Python: 3.10
- dbt: newest

What Oracle database version are you using dbt with?

21 XE

Additional Context

[Feature] parallel insert and merge statements

Describe the Feature

I would like to have parallel hints added to insert and merge statements based on an assigned parameter.

I've achieved this by altering the oracle__get_incremental_merge_sql macro:

{# added parallel param and parallel hint on line 18 and 34 #}
{% macro oracle__get_incremental_merge_sql(args_dict) %}
  {%- set parallel = config.get('parallel', none) -%}
  {%- set dest_columns = args_dict["dest_columns"] -%}
  {%- set temp_relation = args_dict["temp_relation"] -%}
  {%- set target_relation = args_dict["target_relation"] -%}
  {%- set unique_key = args_dict["unique_key"] -%}
  {%- set dest_column_names = dest_columns | map(attribute='name') | list -%}
  {%- set dest_cols_csv = get_quoted_column_csv(model, dest_column_names) -%}
  {%- set merge_update_columns = config.get('merge_update_columns') -%}
  {%- set merge_exclude_columns = config.get('merge_exclude_columns') -%}
  {%- set incremental_predicates = args_dict["incremental_predicates"] -%}
  {%- set update_columns = get_merge_update_columns(merge_update_columns, merge_exclude_columns, dest_columns) -%}
  {%- if unique_key -%}
    {%- set unique_key_result = oracle_check_and_quote_unique_key_for_incremental_merge(unique_key, incremental_predicates) -%}
    {%- set unique_key_list = unique_key_result['unique_key_list'] -%}
    {%- set unique_key_merge_predicates = unique_key_result['unique_key_merge_predicates'] -%}
    merge into {% if parallel %} /*+parallel({{ parallel }})*/ {% endif %} {{ target_relation }} DBT_INTERNAL_DEST
      using {{ temp_relation }} DBT_INTERNAL_SOURCE
      on ({{ unique_key_merge_predicates | join(' AND ') }})
    when matched then
      update set
      {% for col in update_columns if (col.upper() not in unique_key_list and col not in unique_key_list) -%}
        DBT_INTERNAL_DEST.{{ col }} = DBT_INTERNAL_SOURCE.{{ col }}{% if not loop.last %}, {% endif %}
      {% endfor -%}
    when not matched then
      insert({{ dest_cols_csv }})
      values(
      {% for col in dest_columns -%}
        DBT_INTERNAL_SOURCE.{{ adapter.check_and_quote_identifier(col.name, model.columns) }}{% if not loop.last %}, {% endif %}
      {% endfor -%}
      )
  {%- else -%}
    insert into {% if parallel %} /*+parallel({{ parallel }})*/ {% endif %} {{ target_relation }} ({{ dest_cols_csv }})
    ( select {{ dest_cols_csv }} from {{ temp_relation }} )
  {%- endif -%}
{% endmacro %}
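With this change, the degree of parallelism could then be set per model via the newly introduced parallel config key, e.g. (a sketch; unique_key shown to exercise the merge branch):

```sql
{{ config(materialized='incremental', unique_key='id', parallel=4) }}
```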

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

[Bug] get_relation reporting dbt found an approximate match

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Hello, I am the author of the dbt_constraints package and I am attempting to add support for dbt-oracle.

I am receiving the following error when my code runs adapter.get_relation():

Encountered an error:
Compilation Error in operation dbt_constraints-on-run-end-0 (./dbt_project.yml)
  When searching for a relation, dbt found an approximate match. Instead of guessing 
  which relation to use, dbt will move on. Please delete DBT_USER.dim_customers, or rename it to be less ambiguous.
  Searched for: dbt_user.dim_customers
  Found: DBT_USER.dim_customers

I have a schema named DBT_USER, and I have specified the schema in upper case in my profile.

I believe the root cause is how dbt-oracle changes the case of objects in your catalog.sql:

          select
              lower(tables.table_catalog) as "table_database",
              lower(tables.table_schema) as "table_schema",
              lower(tables.table_name) as "table_name",
              lower(tables.table_type) as "table_type",
              all_tab_comments.comments as "table_comment",
              lower(columns.column_name) as "column_name",
              ordinal_position as "column_index",
              lower(case
                when data_type like '%CHAR%'
                then data_type || '(' || cast(char_length as varchar(10)) || ')'
                else data_type
              end) as "column_type",
              all_col_comments.comments as "column_comment",
              tables.table_schema as "table_owner"
          from tables

I work for Snowflake, and our database uses exactly the same conventions as Oracle for how identifiers are treated with and without quotes: identifiers become upper case if not quoted and are case-sensitive if quoted. Therefore, I think it can be helpful to compare some of your code against the dbt Labs code for dbt-snowflake. In the dbt Labs implementation of dbt-snowflake, they do not adjust the case of databases, schemas, tables, or columns in catalog.sql.

There could also be an issue with the oracle__get_columns_in_relation macro:

      select
          lower(column_name) as "name",
          lower(data_type) as "type",
          char_length as "character_maximum_length",
          numeric_precision as "numeric_precision",
          numeric_scale as "numeric_scale"
      from columns

In the snowflake__get_columns_in_relation macro, dbt Labs uses DESCRIBE TABLE to retrieve columns and does not change the case of column names or data types (upper case, just like Oracle).

I reviewed all the SQL lookups dbt Labs uses for Snowflake, and I consistently see UPPER() = UPPER() in their WHERE clauses; in all other cases their metadata lookups preserve the case of databases, schemas, tables, and columns. I recommend dbt-oracle do the same.
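The mismatch can be illustrated with a small sketch (not dbt code): Oracle-style resolution treats unquoted identifiers case-insensitively, so the comparison should normalize case while the catalog preserves the original spelling:

```python
def relations_match(searched: str, found: str) -> bool:
    # Unquoted Oracle identifiers are case-insensitive, so compare
    # case-insensitively instead of lowercasing the stored metadata.
    return searched.upper() == found.upper()

print(relations_match("dbt_user.dim_customers", "DBT_USER.dim_customers"))  # True
```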

Expected Behavior

No response

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS: MacOS
- Python: 3.8.13
- dbt: 1.0.6

What Oracle database version are you using dbt with?

Oracle XE 21c

Additional Context

No response

[Bug] recreating table causes ORA-00955 name is already used by an existing object

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Running dbt run -m test_table works fine the first time. When the command is run again, this error is raised:
ORA-00955: name is already used by an existing object
Looking into the log file, two SQL statements are executed:
CREATE TABLE *****.test_table__dbt_tmp as SELECT.......
ALTER TABLE ****.test_table__dbt_tmp rename to test_table
The error occurs at the rename step because test_table still exists from the first run; shouldn't it be dropped?

Expected Behavior

test_table should be dropped before renaming.
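A sketch of the expected sequence, following the statements in the log above (the SELECT body is elided as in the log):

```sql
CREATE TABLE test_table__dbt_tmp AS SELECT ...;
DROP TABLE test_table;  -- drop the old target before renaming
ALTER TABLE test_table__dbt_tmp RENAME TO test_table;
```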

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS: Red Hat 8.5.0-3
- Python: 3.8.12
- dbt: 1.2.1
- dbt-oracle: 1.2.1

What Oracle database version are you using dbt with?

12c

Additional Context

No response

upgrade to support dbt-core v1.3.0

Background

The latest release cut for 1.3.0, dbt-core==1.3.0rc2, was published on October 3, 2022 (PyPI | GitHub). We are targeting the official cut of 1.3.0 for the week of October 16 (in time for the Coalesce conference).

We're trying to establish the following precedent w.r.t. minor versions:
Partner adapter maintainers release their adapter's minor version within four weeks of the initial RC being released. Given the delay on our side in notifying you, we'd like to set a target date of November 7 (four weeks from today) for maintainers to release their minor version.

Timeframe    | Date (intended) | Date (actual) | Event
D - 3 weeks  | Sep 21          | Oct 10        | dbt Labs informs maintainers of upcoming minor release
D - 2 weeks  | Sep 28          | Sep 28        | core 1.3 RC is released
Day D        | Oct 12          | Oct 12        | core 1.3 official is published
D + 2 weeks  | Oct 26          | Nov 7         | dbt-adapter 1.3 is published

How to upgrade

dbt-labs/dbt-core#6011 is an open discussion with more detailed information, and dbt-labs/dbt-core#6040 is for keeping track of the community's progress on releasing 1.3.0.

Below is a checklist of work that would enable a successful 1.3.0 release of your adapter.

  • Python Models (if applicable)
  • Incremental Materialization: cleanup and standardization
  • More functional adapter tests to inherit

ORA-00904: tmp columns less target columns

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

If the tmp relation has fewer columns than the target relation, we get error ORA-00904.

The columns need to be fetched from tmp_relation, not target_relation:

{%- set dest_columns = adapter.get_columns_in_relation(target_relation) -%}
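A minimal sketch of the suggested change (the tmp_relation variable name is assumed from the surrounding macro context):

```sql
{%- set dest_columns = adapter.get_columns_in_relation(tmp_relation) -%}
```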

logs:
dbt1.log

Expected Behavior

08:02:37 1 of 1 START incremental model cdi_etl.customers................................ [RUN]
08:02:37 1 of 1 OK created incremental model cdi_etl.customers........................... [OK in 0.34s]
08:02:37 Finished running 1 incremental model in 1.23s.
08:02:37 Completed successfully

Steps To Reproduce

CREATE TABLE customers (
    id NUMBER NOT NULL,
    first_name VARCHAR2(20) NOT NULL,
    last_name VARCHAR2(20) NOT NULL,
    hashvalue VARCHAR2(100)
);

model:
{{ config(
    materialized='incremental'
) }}

SELECT
    98589 AS id,
    'alex' AS first_name,
    'morti' AS last_name
FROM dual

run model:
08:00:00 1 of 1 START incremental model cdi_etl.customers................................ [RUN]
08:00:00 oracle adapter: Oracle error: ORA-00904: "HASHVALUE": invalid identifier
08:00:00 1 of 1 ERROR creating incremental model cdi_etl.customers....................... [ERROR in 0.29s]

Relevant log output using --debug flag enabled

No response

Environment

- OS: macOS
- Python: 3.9.7
- dbt_core: 1.0.8
- dbt_oracle: 1.0.4

What Oracle database version are you using dbt with?

12.1.0

Additional Context

No response

Creation of Constraints fails

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

For some models I create primary keys with the following macro:

{% macro primary_key(this, column) %}
{% set index_name = 'PK_' + this.name %}

DECLARE
    constraint_name_exists EXCEPTION;
    PRAGMA EXCEPTION_INIT(constraint_name_exists, -2264);
    name_exists EXCEPTION;
    PRAGMA EXCEPTION_INIT(name_exists, -955);
BEGIN
    EXECUTE IMMEDIATE 'alter table {{ this.name }} add constraint {{ index_name }} primary key ({{ column }}) nologging';
END;
{% endmacro %}

When running the model for the first time everything works fine; from the second run on, the macro throws exception ORA-02264.

It looks as if the constraints are not deleted before the table is recreated, so an object with the constraint's name still exists in the database. But due to the renaming of the table from table__dbt_tmp to table, the constraint is not assigned to the table either.

Expected Behavior

Delete all existing constraints and indexes while renaming the table in oracle__rename_relation:

{% macro oracle__rename_relation(from_relation, to_relation) -%}
    {% call statement('drop_constraints') -%}
    BEGIN
        FOR r IN (SELECT table_name, constraint_name FROM user_constraints WHERE table_name = upper('{{ from_relation.include(False, False, True).quote(schema=False, identifier=False) }}')) LOOP
            EXECUTE IMMEDIATE 'alter table ' || r.table_name || ' drop constraint ' || r.constraint_name || ' CASCADE';
        END LOOP;
    END;
  {%- endcall %}

  {% call statement('drop_indexes') -%}
    BEGIN
        FOR r IN (SELECT table_name, index_name FROM user_indexes WHERE table_name = upper('{{ from_relation.include(False, False, True).quote(schema=False, identifier=False) }}')) LOOP
            EXECUTE IMMEDIATE 'DROP INDEX ' || r.index_name;
        END LOOP;
    END;
  {%- endcall %}

  {% call statement('rename_relation') -%}
    ALTER {{ from_relation.type }} {{ from_relation }} rename to {{ to_relation.include(schema=False) }}
  {%- endcall %}
{% endmacro %}

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS:Windows
- Python: 3.8.10
- dbt: 1.2.1

What Oracle database version are you using dbt with?

21c

Additional Context

No response


invalidate_hard_deletes=True directive does not work

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

If we have the following snapshot config:

{{
    config(
      target_database='psbi',
      target_schema='dwh',
      unique_key='ID',
      strategy='check',
      check_cols='all',
      invalidate_hard_deletes=True
    )
}}

and run

dbt snapshot

we obtain the following error:

04:46:44  Database Error in snapshot fct_piano_car (snapshots/fct_piano_car.sql)
04:46:44    ORA-00905: missing keyword

Expected Behavior

No response

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS: Ubuntu 20.04
- Python:  3.8.10
- dbt: 1.1.1
- dbt-oracle: 1.1.1

What Oracle database version are you using dbt with?

19c

Additional Context

none

upgrade to support dbt-core v1.2.0

We've just published the release cut of dbt-core 1.2.0: dbt-core==1.2.0rc1 (PyPI | GitHub release notes).

dbt-labs/dbt-core#5468 is an open discussion with more detailed information, and dbt-labs/dbt-core#5474 is for keeping track of the community's progress on releasing 1.2.0.

Below is a checklist of work that would enable a successful 1.2.0 release of your adapter.

  • migrate necessary cross-db macros into the adapter and ensure they're tested accordingly
  • remove any copy-and-pasted materializations (if your adapter inherits from another adapter)
  • add the new basic tests BaseDocsGenerate and BaseDocsGenReferences
  • consider checking and testing support for Python 3.10

dbt-labs/dbt-core#5432 might make it into the second release cut in the next week, in which case you'll also want to:

  • implement method and tests for connection retry logic

[Feature] release `dbt-oracle` `v1.1.0`?

Describe the Feature

It'd be great to let Oracle users use dbt-core 1.1.0. Additionally, as partners, it'd be great to ensure that you have migrated to the unit test suite. This will be the foundation for ensuring a good UX for end users moving forward.

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

[Feature] Adding a WHERE condition during MERGE update

Hi,

It would be nice to be able to control the incremental merge strategy by adding a WHERE clause to the WHEN MATCHED THEN part of the merge statement. As far as I can see on the dbt-oracle GitHub, the adapter doesn't support it.

ON (src.id = tgt.id AND src.date = tgt.date)
WHEN MATCHED THEN
    UPDATE SET
        tgt.end_date = src.end_date
    WHERE
        src.end_date <> tgt.end_date
WHEN NOT MATCHED THEN
    INSERT (tgt.id, tgt.date, tgt.end_date)
    VALUES (src.id, src.date, src.end_date)

The #db-oracle channel on dbt Slack recommended that I raise a feature request so it gets tracked.

Describe alternatives you've considered

This is the response I got in the #db-oracle Slack channel from @aosingh:

dbt-oracle implements the functionality conforming to the standard defined by dbt.
However, I see that you need support for an arbitrary where clause condition during MERGE update. We will need to think about the interface to implement this. One way could be to introduce a dbt-oracle specific parameter. The challenge is to make this a generic config which can work for any where condition. The where condition can be on the target data or source data or both.
You can also raise a feature request here - https://github.com/oracle/dbt-oracle/issues so that this gets tracked

Who will this benefit?

No response

Anything else?

No response

dbt-oracle 1.3.1 problem "oracle adapter: Oracle error: ORA-00972: identifier is too long" on dbt snapshot

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

The same project behaves differently between versions.

dbt-oracle 1.3.0 works fine:

root@new-airflow-dbt:~/airflow/dags/SALES_ORDERS_MANAGEMENT/snapshots# dbt snapshot
05:54:07  oracle adapter: Running in cx mode
05:54:08  Running with dbt=1.3.0
05:54:08  Found 1 model, 0 tests, 3 snapshots, 0 analyses, 324 macros, 2 operations, 0 seed files, 15 sources, 0 exposures, 0 metrics
05:54:08  
05:54:08  
05:54:08  Running 1 on-run-start hook
05:54:08  1 of 1 START hook: dbt_project.on-run-start.0 .................................. [RUN]
05:54:08  1 of 1 OK hook: dbt_project.on-run-start.0 ..................................... [OK in 0.00s]
05:54:08  
05:54:08  Concurrency: 8 threads (target='prod')
05:54:08  
05:54:08  1 of 3 START snapshot psofa.sales_orders ....................................... [RUN]
05:54:08  2 of 3 START snapshot psofa.sales_orders_appog ................................. [RUN]
05:54:08  3 of 3 START snapshot psofa.uniters_orders ..................................... [RUN]
05:54:14  1 of 3 OK snapshotted psofa.sales_orders ....................................... [OK in 5.32s]
05:54:14  2 of 3 OK snapshotted psofa.sales_orders_appog ................................. [OK in 5.90s]
05:54:22  3 of 3 OK snapshotted psofa.uniters_orders ..................................... [OK in 13.28s]
05:54:22  
05:54:22  Running 1 on-run-end hook
05:54:22  1 of 1 START hook: dbt_project.on-run-end.0 .................................... [RUN]
05:54:22  1 of 1 OK hook: dbt_project.on-run-end.0 ....................................... [OK in 0.00s]
05:54:22  
05:54:22  
05:54:22  Finished running 3 snapshots, 2 hooks in 0 hours 0 minutes and 14.15 seconds (14.15s).
05:54:22  
05:54:22  Completed successfully
05:54:22  
05:54:22  Done. PASS=3 WARN=0 ERROR=0 SKIP=0 TOTAL=3

dbt-oracle 1.3.1 gives the following output:

root@new-airflow-dbt:~/airflow/dags/SALES_ORDERS_MANAGEMENT/snapshots# dbt snapshot
05:43:33 oracle adapter: Running in cx mode
05:43:33 Running with dbt=1.3.1
05:43:33 Found 1 model, 0 tests, 3 snapshots, 0 analyses, 325 macros, 2 operations, 0 seed files, 15 sources, 0 exposures, 0 metrics
05:43:33
05:43:34
05:43:34 Running 1 on-run-start hook
05:43:34 1 of 1 START hook: dbt_project.on-run-start.0 .................................. [RUN]
05:43:34 1 of 1 OK hook: dbt_project.on-run-start.0 ..................................... [OK in 0.00s]
05:43:34
05:43:34 Concurrency: 8 threads (target='prod')
05:43:34
05:43:34 1 of 3 START snapshot psofa.sales_orders ....................................... [RUN]
05:43:34 2 of 3 START snapshot psofa.sales_orders_appog ................................. [RUN]
05:43:34 3 of 3 START snapshot psofa.uniters_orders ..................................... [RUN]
05:43:34 oracle adapter: Oracle error: ORA-00972: identificativo troppo lungo [identifier is too long]
05:43:34 3 of 3 ERROR snapshotting psofa.uniters_orders ................................. [ERROR in 0.46s]
05:43:35 oracle adapter: Oracle error: ORA-00972: identificativo troppo lungo
05:43:35 2 of 3 ERROR snapshotting psofa.sales_orders_appog ............................. [ERROR in 0.84s]
05:43:39 1 of 3 OK snapshotted psofa.sales_orders ....................................... [OK in 4.96s]
05:43:39
05:43:39 Running 1 on-run-end hook
05:43:39 1 of 1 START hook: dbt_project.on-run-end.0 .................................... [RUN]
05:43:39 1 of 1 OK hook: dbt_project.on-run-end.0 ....................................... [OK in 0.00s]
05:43:39
05:43:39
05:43:39 Finished running 3 snapshots, 2 hooks in 0 hours 0 minutes and 5.84 seconds (5.84s).
05:43:39
05:43:39 Completed with 2 errors and 0 warnings:
05:43:39
05:43:39 Database Error in snapshot uniters_orders (snapshots/uniters_orders.sql)
05:43:39 ORA-00972: identificativo troppo lungo
05:43:39
05:43:39 Database Error in snapshot sales_orders_appog (snapshots/sales_orders_appog.sql)
05:43:39 ORA-00972: identificativo troppo lungo
05:43:39
05:43:39 Done. PASS=1 WARN=0 ERROR=2 SKIP=0 TOTAL=3

Expected Behavior

It is logical to expect the same behaviour.
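For context, Oracle releases before 12.2 limit identifier names to 30 bytes (12.2 raised this to 128), so a generated snapshot or temp-table name can exceed the limit on 11.2.0.4. A quick check (the example names are illustrative, not the actual generated identifiers):

```python
MAX_LEN_PRE_12_2 = 30  # bytes; 128 from Oracle 12.2 onward

def fits_oracle_11g(identifier: str) -> bool:
    # Oracle counts identifier length in bytes, not characters.
    return len(identifier.encode("utf-8")) <= MAX_LEN_PRE_12_2

print(fits_oracle_11g("sales_orders_appog"))                    # True
print(fits_oracle_11g("sales_orders_appog__dbt_tmp_20230101"))  # False
```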

Steps To Reproduce

  1. pip install --upgrade dbt-oracle==1.3.1
  2. dbt snapshot

Relevant log output using --debug flag enabled

root@new-airflow-dbt:~/airflow/dags/SALES_ORDERS_MANAGEMENT/snapshots# dbt debug
05:58:19  Running with dbt=1.3.1
dbt version: 1.3.1
python version: 3.8.10
python path: /root/dbt/bin/python3
os info: Linux-5.4.17-2136.307.3.5.el8uek.x86_64-x86_64-with-glibc2.29
Using profiles.yml file at /root/.dbt/profiles.yml
Using dbt_project.yml file at /root/airflow/dags/SALES_ORDERS_MANAGEMENT/dbt_project.yml

05:58:19  oracle adapter: Running in cx mode
Configuration:
  profiles.yml file [OK found and valid]
  dbt_project.yml file [OK found and valid]

Required dependencies:
 - git [OK found]

Connection:
  user: psofa
  database: psdb
  schema: psofa
  protocol: tcp
  host: 172.16.3.32
  port: 1521
  tns_name: PSDBPROD
  service: psdb
  connection_string: None
  shardingkey: []
  supershardingkey: []
  cclass: None
  purity: None
  retry_count: 1
  retry_delay: 3
  Connection test: [OK connection ok]

All checks passed!

Environment

- OS: Ubuntu 20.04.5 LTS
- Python: Python 3.8.10
- dbt: 1.3.1

What Oracle database version are you using dbt with?

11.2.0.4

Additional Context

No response

[Bug] Incremental upsert strategy fails with ORA-00911 & ORA-01747

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

If a table contains column names like, for example:

  • "DESC"
  • "_AIRBYTE_AB_ID"
  • "_AIRBYTE_EMITTED_AT"
  • "_AIRBYTE_NORMALIZED_AT"

the dbt-oracle incremental strategy fails with
ORA-01747: invalid user.table.column, table.column, or column specification (for example, for the column name desc -> "DESC")
or
ORA-00911: invalid character (for the required internal Airbyte system columns, e.g. "_AIRBYTE_AB_ID" or "_AIRBYTE_NORMALIZED_AT").

This is because the following macro call returns the columns lowercase and unquoted instead of uppercase and quoted, no matter whether the OracleQuotePolicy for schema and identifier is set to true or not.

{% macro oracle_incremental_upsert(tmp_relation, target_relation, unique_key=none, statement_name="main") %}
{%- set dest_columns = adapter.get_columns_in_relation(target_relation) -%}

Setting the schema and identifier quoting flags makes no difference:

quoting:
  database: false
  schema: false
  identifier: false

The two for loops that fill in the columns for the when matched then update set and when not matched then insert(...) values(...) clauses cause the issue, as they produce, for example:

merge into staging.ln_tfgld008 target
  using staging.o$pt_ln_tfgld008125639 temp
  on (temp."_AIRBYTE_AB_ID" = target."_AIRBYTE_AB_ID")
when matched then
  update set
    target.bloc = temp.bloc,
    target.desc = temp.desc, -- desc is not an issue as long as it is quoted "DESC"
    target.timestamp = temp.timestamp, -- timestamp surprisingly is not an issue and does not trigger ORA-01747
    target._airbyte_ab_id = temp._airbyte_ab_id, -- triggers ORA-00911 without quoting
    target._airbyte_emitted_at = temp._airbyte_emitted_at, -- triggers ORA-00911 without quoting
    target._airbyte_normalized_at = temp._airbyte_normalized_at, -- triggers ORA-00911 without quoting
    target._airbyte_ln_tfgld008_hashid = temp._airbyte_ln_tfgld008_hashid -- triggers ORA-00911 without quoting
when not matched then
  insert(bloc, desc, timestamp, _airbyte_ab_id, _airbyte_emitted_at, _airbyte_normalized_at, _airbyte_ln_tfgld008_hashid)
  values(
      temp.bloc,
      temp.desc,
      temp.timestamp,
      temp._airbyte_ab_id,
      temp._airbyte_emitted_at,
      temp._airbyte_normalized_at,
      temp._airbyte_ln_tfgld008_hashid
  )

Expected Behavior

The dbt-oracle incremental upsert strategy should not fail for users who do not have control over the column names of their tables.
If a column uses a reserved keyword or starts with an invalid character, dbt-oracle should by default treat the column as quoted and uppercase.
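A sketch of such a default policy (the reserved-word set below is an illustrative subset, not Oracle's full list, and the helper is not dbt-oracle code):

```python
import re

RESERVED = {"DESC", "DATE", "NUMBER", "TABLE"}  # illustrative subset

def quote_if_needed(name: str) -> str:
    # Quote when the name is a reserved word or is not a valid unquoted
    # Oracle identifier (must start with a letter; then letters, digits, _ $ #).
    if name.upper() in RESERVED or not re.fullmatch(r"[A-Za-z][A-Za-z0-9_$#]*", name):
        return f'"{name.upper()}"'
    return name

print(quote_if_needed("desc"))            # "DESC"
print(quote_if_needed("_airbyte_ab_id"))  # "_AIRBYTE_AB_ID"
print(quote_if_needed("bloc"))            # bloc
```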

Steps To Reproduce

  1. Set the schema and identifier flags to true:
     quoting:
       database: false
       identifier: false
       schema: false
  2. Add one of the above columns that trigger the ORA error to https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/seeds/seed.csv
  3. Add the column to the incremental materialization test https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/models/us_product_sales_channel_ranking.sql
  4. Run the tests https://github.com/oracle/dbt-oracle/blob/7348c2b21883c156583adc8282876c7a993eb13d/dbt_adbs_test_project/README.md

Relevant log output using --debug flag enabled

No response

Environment

- OS: Oracle Linux Server 8.6
- Python: 3.9.13
- dbt: 1.0.6

What Oracle database version are you using dbt with?

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production

Additional Context

No response

Upgrade dbt-core v1.2.2

Describe the Feature

Upgrade dependencies

Bump dbt-oracle's version

[Bug] don't drop temporary table o$pt

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

The global temporary table is not dropped after running the model, because the ORA-14452 error is suppressed in dbt-oracle/blob/main/dbt/include/oracle/macros/adapters.sql:

pragma EXCEPTION_INIT(attempted_ddl_on_in_use_GTT, -14452);

Example:

create global temporary table TEST (object_id number, object_name varchar2(30)) on commit PRESERVE rows;

Such tables get bound to the session by the first insert, and the binding is only removed when the session exits or by TRUNCATE. While the binding exists, DDL cannot be performed on these temporary tables; attempting it raises ORA-14452: attempt to create, alter or drop an index on temporary table already in use.

Add the following before the drop:

{% for rel in to_drop %}
    {% do adapter.truncate_relation(rel) %}
{% endfor %}
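The session binding can be demonstrated in plain SQL (a sketch; the table name is illustrative):

```sql
create global temporary table test_gtt (id number) on commit preserve rows;
insert into test_gtt values (1);  -- first insert binds the GTT to this session
drop table test_gtt;              -- fails with ORA-14452 while the binding exists
truncate table test_gtt;          -- releases the binding
drop table test_gtt;              -- now succeeds
```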

Expected Behavior

The temporary table should be dropped.

Steps To Reproduce

  1. Create a model with materialized='incremental'

Relevant log output using --debug flag enabled

No response

Environment

- OS: macOS
- Python: 3.9
- dbt-core: 1.1.0

What Oracle database version are you using dbt with?

12.1.0

Additional Context

No response

[Bug] Renaming the old_relation does not work in Oracle 12c+ when using DBT table materialization with missing database name in profiles.yml

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

1 of 1 START table model staging.test.......................................................................... [RUN]
1 of 1 ERROR creating table model staging.test................................................................. [ERROR in 0.96s]
oracle adapter: Oracle error: ORA-00955: name is already used by an existing object

Finished running 1 table models in 0.96s.

Completed with 1 errors and 0 warnings:

Database Error in model test (models/generated/airbyte_tables/staging/test.sql)
  ORA-00955: name is already used by an existing object
  compiled SQL at ../build/run/airbyte_utils/models/generated/airbyte_tables/staging/test.sql

Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1

Expected Behavior

1 of 1 OK created table model staging.test..................................................................... [OK in 1.45s]

Finished running 1 table models in 1.45s.

Completed successfully

Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1

Steps To Reproduce

Run the DBT table materialization twice.

Relevant log output using --debug flag enabled

No response

Environment

- OS: Oracle Linux Server 8.5
- Python: 3.9.9
- dbt: 1.0.6

What Oracle database version are you using dbt with?

Tested using

  • Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    and
  • Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Additional Context

Pull request fix #3

[Feature] oracle__make_temp_relation identifier uniqueness improvements

Describe the Feature

Hello. At the moment the temp relation unique identifier uses a date string with "%H%M%S", but when we run dbt in parallel for the same model with incremental materialization, this can cause an issue where the temporary table already exists. It's known that dbt doesn't officially support parallel runs, but in some cases they can be quite useful. Is it possible to add milliseconds ("%H%M%S%f"), or maybe replace that identifier with something else to make it more robust?
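To illustrate the collision window, here is a minimal sketch (identifier scheme simplified; the `o$pt_model` prefix is only an example): two runs starting within the same second produce the same suffix under "%H%M%S", but not with microseconds or a random suffix.

```python
import uuid
from datetime import datetime

def temp_identifier(base, ts, with_micros=False):
    """Build a temp-table suffix the way a second-resolution scheme would."""
    fmt = "%H%M%S%f" if with_micros else "%H%M%S"
    return f"{base}__{ts.strftime(fmt)}"

# Two parallel runs starting 1 ms apart, within the same second:
t1 = datetime(2024, 1, 1, 12, 0, 0, 1000)
t2 = datetime(2024, 1, 1, 12, 0, 0, 2000)

print(temp_identifier("o$pt_model", t1) == temp_identifier("o$pt_model", t2))  # True: collision
print(temp_identifier("o$pt_model", t1, with_micros=True)
      == temp_identifier("o$pt_model", t2, with_micros=True))                  # False: distinct

# A random suffix sidesteps clock resolution entirely:
print(f"o$pt_model__{uuid.uuid4().hex[:8]}")
```

Note that any scheme still has to respect Oracle's identifier length limit, so a short random suffix may be preferable to a longer timestamp.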

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

[Bug] null values should be of type null and not of type string in manifest.json

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Null values are rendered as the literal string "None" (Python's str(None)) in manifest.json.

https://getdbt.slack.com/archives/C01PWH4TXLY/p1677050123391359

Expected Behavior

Null values are rendered as valid JSON nulls in manifest.json
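The difference can be reproduced without dbt; coercing a Python None to str before serialization is the kind of pattern that produces the invalid field:

```python
import json

meta = {"comment": None}

# Correct: json.dumps keeps None as a JSON null
print(json.dumps(meta))                        # {"comment": null}

# Buggy pattern: coercing values to str first turns None into the string "None"
coerced = {k: str(v) for k, v in meta.items()}
print(json.dumps(coerced))                     # {"comment": "None"}
```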

Steps To Reproduce

dbt run

for version 1.3.2 of dbt-oracle

Relevant log output using --debug flag enabled

No response

Environment

- OS: linux
- Python: 3.10
- dbt: 1.3.2

What Oracle database version are you using dbt with?

11

Additional Context

No response

[Bug] Wrong database name when connect to Autonomous Data Warehouse

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I found that the database param resolves to the service name + database name, like hclpszeflasrnfx_unnbpfflyfh4xboz, so dbt can't find the old relation objects.

relation.py

@classmethod
    def create(
        cls: Type[Self],
        database: Optional[str] = None,
        schema: Optional[str] = None,
        identifier: Optional[str] = None,
        type: Optional[RelationType] = None,
        **kwargs,
    ) -> Self:
        kwargs.update(
            {
                "path": {
                    "database": database,
                    "schema": schema,
                    "identifier": identifier,
                },
                "type": type,
            }
        )
        return cls.from_dict(kwargs)

Expected Behavior

No response

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS: MACOS
- Python: 3.9
- dbt: 1.5.0

What Oracle database version are you using dbt with?

19c

Additional Context

No response

[Bug] Column.data_type does not take into account the data_length

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When using column.data_type for columns of type varchar or numeric, the adapter just gives back the column type without the size / data length. You can see this behaviour, for example, when using dbt_utils.union: the generated cast is missing the length.

This results in ORA-00906: missing left parenthesis

Expected Behavior

Return the data length/size in parentheses, e.g. varchar2(20) instead of varchar2.

This can be done by overriding the following methods:

@dataclass
class OracleColumn(Column):

    def is_string(self):
        # in Oracle, 'varchar2' is also a string:
        if self.dtype.lower() in ['varchar2', 'char', 'varchar']:
            return True
        return super().is_string()

    @classmethod
    def string_type(cls, size: int) -> str:
        return "varchar2({})".format(size)

    def is_numeric(self) -> bool:
        # in Oracle, 'number' is also a numeric:
        if self.dtype.lower() in ['number', 'float']:
            return True
        return super().is_numeric()
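A self-contained sketch of the desired data_type behaviour (simplified from dbt's Column class; field names here are assumptions, not the adapter's actual attributes):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SketchColumn:
    column: str
    dtype: str
    char_size: Optional[int] = None
    numeric_precision: Optional[int] = None
    numeric_scale: Optional[int] = None

    def is_string(self) -> bool:
        return self.dtype.lower() in ("varchar2", "char", "varchar")

    def is_numeric(self) -> bool:
        return self.dtype.lower() in ("number", "float")

    @property
    def data_type(self) -> str:
        # include the size/precision so generated casts are valid Oracle SQL
        if self.is_string() and self.char_size:
            return f"{self.dtype}({self.char_size})"
        if self.is_numeric() and self.numeric_precision:
            return f"{self.dtype}({self.numeric_precision},{self.numeric_scale or 0})"
        return self.dtype

print(SketchColumn("NAME", "varchar2", char_size=100).data_type)                      # varchar2(100)
print(SketchColumn("AMT", "number", numeric_precision=10, numeric_scale=2).data_type)  # number(10,2)
```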

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS: Windows
- Python: Python 3.8.10
- dbt: 1.1.1

What Oracle database version are you using dbt with?

19c

Additional Context

No response

[Bug] multi schema materializations not supported without administrative privileges

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

multi schema materializations not supported without administrative privileges

Expected Behavior

DBT should be able to populate pre-created schemata without requiring administrative permissions to execute a CREATE TABLE in those schemata.

A proxy connect is not a viable workaround, as it is explicit and cannot be inferred implicitly, as far as I understand.

Steps To Reproduce

DBT can nicely generate schemata in other databases (postgres, snowflake, ...). However, this is tricky, as my DBT account has no administrative permissions. Rather, it is confined to 3 schemata: a_raw, b_staging, c_mart. While my user foo has proxy-connect privileges to all three schemata (foo[a_raw] etc.), DBT can only register a single target (i.e. foo[a_raw]). This means I would need to break down the dbt run into 3 separate calls to the CLI, and also that the nice selector for upstream models no longer works due to insufficient permissions. How can I change the situation to allow the dbt user (foo) to read/write all three schemata? My DBA tells me that Oracle cannot grant CREATE permissions to any user besides the specific schema user (and administrators), but he is not going to hand me administrative privileges. What can I do to return to a dbt-native situation?

This means when changing the target user from foo[a_raw] to foo[b_staging] to foo[c_mart] (explicitly) CREATE permissions are there. Otherwise not.

Relevant log output using --debug flag enabled

No response

Environment

- OS: windows
- Python: 3.10
- dbt: 1.3.1

What Oracle database version are you using dbt with?

11 and hopefully soon 19c

Additional Context

No response

[Bug] ORA-00942 insert data to another schema

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

test model:

`{{ config(
materialized='incremental',
schema='rt_ailin'
)
}}

SELECT
11235 as id,
'alex' as first_name,
'morti' as last_name
FROM dual`

data is not inserted into another schema

START incremental model rt_ailin.customers............................... [RUN]
oracle adapter: Oracle error: ORA-00942: table or view does not exist
1 of 1 ERROR creating incremental model rt_ailin.customers...................... [ERROR in 0.33s]

logs:
another_schema.log

The error is due to the fact that the script qualifies the temporary table with the custom schema; the schema prefix is not needed, because the global temporary table is created in the current schema anyway

{% set tmp_relation = base_relation.incorporate(
path={"identifier": tmp_identifier}) -%}

fix:
{% set tmp_relation = base_relation.incorporate( path={"identifier": tmp_identifier, "schema": None}) -%}

logs after fix
dbt.log

Expected Behavior

START incremental model rt_ailin.customers............................... [RUN]
OK created incremental model rt_ailin.customers.......................... [OK in 0.49s]

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS:MacOs
- Python:3.9
- dbt-core: 1.0.7

What Oracle database version are you using dbt with?

12.1.0

Additional Context

No response

Ephemeral materialization not working as expected

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Running a SQL query with ephemeral materialization produces nested WITH clauses.

WITH dbt__cte__name__ as (
WITH
source AS (
SELECT * FROM foo_schema.foo_table
),

Expected Behavior

The expected behavior is a linear, not nested, list of CTE clauses.
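A toy sketch of the difference (the data model here is hypothetical, not dbt's actual compiler): naively wrapping the inner model's rendered SQL nests its WITH clause, while hoisting the inner CTEs and appending the model as one more CTE yields a single linear WITH.

```python
# Each "model" is (name, list of (cte_name, cte_sql), final_select)
inner = ("dbt__cte__name__",
         [("source", "SELECT * FROM foo_schema.foo_table")],
         "SELECT * FROM source")

def render_naive(model):
    name, ctes, final = model
    body = "WITH " + ", ".join(f"{n} AS ({s})" for n, s in ctes) + " " + final
    # wrapping the fully rendered model nests its WITH clause
    return f"WITH {name} AS ({body}) SELECT * FROM {name}"

def render_flat(model):
    name, ctes, final = model
    # hoist the inner CTEs, then add the model itself as one more CTE
    all_ctes = ctes + [(name, final)]
    return "WITH " + ", ".join(f"{n} AS ({s})" for n, s in all_ctes) + f" SELECT * FROM {name}"

print(render_naive(inner).count("WITH"))  # 2 -> nested, invalid in many Oracle versions
print(render_flat(inner).count("WITH"))   # 1 -> linear list of CTEs
```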

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS:
- Python: 3.10
- dbt: 1.3.2rc1

What Oracle database version are you using dbt with?

No response

Additional Context

No response

[Feature] Relax version constraint for dbt-core

Describe the Feature

The version constraint for dbt-core is currently set to a specific patch release e.g. dbt-core==1.4.6 which I find a bit too restrictive. It would make it easier to updating dbt-core without the need to wait for an update of dbt-oracle if the version constraint was relaxed a bit to allow for any patch release, e.g. dbt-core~=1.4.
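For reference, a simplified sketch of what a compatible-release pin like dbt-core~=1.4.0 would accept (real resolution follows PEP 440 in full; this only handles plain X.Y.Z version strings):

```python
def compatible(version: str, pin: str) -> bool:
    """Approximate PEP 440 '~= pin' for plain X.Y.Z strings:
    version must be >= pin and share pin's release prefix minus the last segment."""
    v = tuple(int(p) for p in version.split("."))
    p = tuple(int(x) for x in pin.split("."))
    return v[:len(p) - 1] == p[:len(p) - 1] and v >= p

print(compatible("1.4.6", "1.4.0"))  # True  -> patch bumps allowed without an adapter release
print(compatible("1.4.4", "1.4.0"))  # True
print(compatible("1.5.0", "1.4.0"))  # False -> a minor bump still needs a new dbt-oracle
```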

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

[Bug] Append incremental strategy does not work due to typo

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Building an incremental model with append strategy fails with message:

Compilation Error in macro oracle__get_incremental_append_sql (macros/materializations/incremental/strategies.sql)
09:10:25    'args_dict' is undefined
09:10:25    
09:10:25    > in macro get_incremental_append_sql (macros/materializations/models/incremental/strategies.sql)
09:10:25    > called by macro materialization_incremental_oracle (macros/materializations/incremental/incremental.sql)
09:10:25    > called by macro oracle__get_incremental_append_sql (macros/materializations/incremental/strategies.sql)

Expected Behavior

I expect the model to build with the append-strategy.

There is a typo in strategies.sql

Relevant section:

{% macro oracle__get_incremental_append_sql(arg_dict) %}
    {%- set dest_columns = args_dict["dest_columns"] -%}
    {%- set temp_relation = args_dict["temp_relation"] -%}
    {%- set target_relation = args_dict["target_relation"] -%}
    {%- set dest_column_names = dest_columns | map(attribute='name') | list -%}
    {%- set dest_cols_csv = get_quoted_column_csv(model, dest_column_names)  -%}
    INSERT INTO {{ target_relation }} ({{ dest_cols_csv }})
    (
       SELECT {{ dest_cols_csv }}
       FROM {{ temp_relation }}
    )
{% endmacro %}

Rename the parameter arg_dict to args_dict in the macro signature (line 95 in the source file) so it matches the references in the body.

Steps To Reproduce

  1. Create model with:
{{
    config(
        materialized='incremental',
        incremental_strategy = 'append'
    )
}}

--- Query

Relevant log output using --debug flag enabled

No response

Environment

- OS: Ubuntu 18.04
- Python: 3.10.7
- dbt: 1.3.1

What Oracle database version are you using dbt with?

11.2

Additional Context

No response

[Feature] Docs should mention how to install Oracle client libraries

Describe the Feature

This repo's README.md as well as the Oracle profile setup page on docs.getdbt.com do not mention how to install the client libraries or even link to the standard Oracle doc on how to do so -- they should as its an important step that users have to take!

Describe alternatives you've considered

If #17 , is addressed, then perhaps information will no longer be needed!

Who will this benefit?

All new users to dbt-oracle!

Anything else?

No response

Error while passing schema inside the model using {{ config(materialized='table',schema='schema')}}

Hi,

  • If I pass the schema inside my model, either as a config parameter ({{ config(materialized='table',schema='schema') }}) or as a source parameter ({{ source('schema','table') }}), the model fails with the following error:
    Model 'model.Test_project.model_name' (models/pod_os_stats/model_name.sql) depends on a source named 'schema.table' which was not found
  • If I remove the schema from the source parameter and keep it only in the config, I get the error below:
    dbt.exceptions.CompilationException: Compilation Error in macro oracle__create_schema (macros/adapters.sql)
    'relation' is undefined
    in macro create_schema (macros/adapters/schema.sql)
    called by macro oracle__create_schema (macros/adapters.sql)
  • If I remove the schema from both the config and the source parameter and simply use the table name, it succeeds.

[Bug] Flaky generic tests on multiple threads

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Installing dbt-oracle 1.4.1 I suffer from dbt-labs/dbt-core#6885 (flaky generic tests on multiple threads), because it has a hard dependency on dbt-core 1.4.4.

This issue is already fixed in dbt-core 1.4.5.

Could you bump the dependency from dbt-core from 1.4.4 to 1.4.5 (just as you did in #76)?

Expected Behavior

generic tests work using multiple threads

Steps To Reproduce

  1. Install dbt-oracle 1.4.1 (and thus dbt-core 1.4.4).
  2. Add a generic test.
  3. Use multiple threads.
  4. Run the tests.

(see dbt-labs/dbt-core#6885 for details)

Relevant log output using --debug flag enabled

No response

Environment

- OS: Linux
- Python: 3.8.10
- dbt: 1.4.4

What Oracle database version are you using dbt with?

No response

Additional Context

No response

upgrade to support dbt-core v1.2.0

We've just published the release cut of dbt-core 1.2.0, dbt-core 1.2.0rc1 (PyPI | GitHub release notes).

dbt-labs/dbt-core#5468 is an open discussion with more detailed information, and dbt-labs/dbt-core#5474 is for keeping track of the community's progress on releasing 1.2.0

Below is a checklist of work that would enable a successful 1.2.0 release of your adapter.

  • migrate necessary cross-db macros into adapter and ensure they're tested accordingly
  • remove any copy-and-pasted materialization (if your adapter inherits from another adapter)
  • add new basic tests BaseDocsGenerate and BaseDocsGenReferences
  • consider checking and testing support for Python 3.10
  • managing access grants

dbt-labs/dbt-core#5432 might make it into the second release cut in the next week, in which case, you'll also might want to:

  • implement method and tests for connection retry logic

[Feature] affected row count

Describe the Feature

I would like to see a feature that shows affected row count, similar to sql%rowcount from PL/SQL. I'm aware of the variable "rows_affected" in run_results.json but currently it is always showing 0.
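For what it's worth, the underlying driver already exposes this: DB-API cursors (including python-oracledb's) carry a rowcount attribute after DML, which the adapter could surface into rows_affected. A minimal stand-in using sqlite3, just to show the DB-API behaviour being asked for:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER, status TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(1, "new"), (2, "new"), (3, "done")])

cur.execute("UPDATE t SET status = 'done' WHERE status = 'new'")
print(cur.rowcount)  # 2 -- the sql%rowcount-style value the adapter could report
```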

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

[Feature] Provide option to only update specific columns in incremental model

Describe the Feature

When building data models using dbt-oracle, there is currently no option, for an incremental model, to update only the columns I want. The adapter updates all columns except the unique_keys. (I just checked the code and found this, so if it has already been addressed, please let me know.)

For example, my data model follows the Fact-Dimension model, and here is an example Dimension table:
Table: D_MODEL
Columns: ID, MODEL_NAME, UPDATED_DATE

As we can see, the unique_key ID in the Dimension table is just used to join the Fact table. But, the merge on columns is MODEL_NAME, it is not the primary key of Dimension tables.

So, in this case, I only want to update the column UPDATED_DATE, but there is no config to express that. dbt-oracle updates the columns ID and UPDATED_DATE, and the whole Fact-Dim model breaks because the join no longer works.

Describe alternatives you've considered

Recently, I have worked with dbt-bigquery and they have the option merge_update_columns docs to update only columns they want.

I also read the merge_update_columns source code and found they customized the BaseConfig with BigqueryConfig, then use them on Jinja code too to filter out targeted columns.

So, our problem can be resolved by just adding an extra argument and a bit of logic; it seems easy!
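A sketch of that filtering logic (simplified; the merge_update_columns name is borrowed from the BigQuery adapter, and the SET-clause shape is illustrative, not dbt-oracle's actual macro output):

```python
def merge_update_set_clause(dest_columns, unique_key, merge_update_columns=None):
    """Decide which columns appear in the MERGE ... UPDATE SET clause."""
    if merge_update_columns:                       # explicit allow-list wins
        cols = [c for c in dest_columns if c in merge_update_columns]
    else:                                          # default: everything but the key
        cols = [c for c in dest_columns if c != unique_key]
    return ", ".join(f"target.{c} = temp.{c}" for c in cols)

cols = ["ID", "MODEL_NAME", "UPDATED_DATE"]
print(merge_update_set_clause(cols, "MODEL_NAME"))
# target.ID = temp.ID, target.UPDATED_DATE = temp.UPDATED_DATE  (ID gets clobbered)
print(merge_update_set_clause(cols, "MODEL_NAME", merge_update_columns=["UPDATED_DATE"]))
# target.UPDATED_DATE = temp.UPDATED_DATE  (ID left alone, join stays intact)
```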

Who will this benefit?

I think this would be very useful for many users who build special data models and hit this unexpected behavior, and it would make dbt-oracle more flexible.

Anything else?

Maybe, if I have time, I will experiment with this. Also, can I create a pull request or otherwise contribute to this project?

[Bug] "dbt deps" fails with ModuleNotFoundError: No module named 'pytz'

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After installing dbt-oracle 1.4.0 I get the following error when running dbt deps:

Traceback (most recent call last):
  File "/usr/local/bin/dbt", line 5, in <module>
    from dbt.main import main
  File "/usr/local/lib/python3.10/site-packages/dbt/main.py", line 24, in <module>
    import dbt.task.build as build_task
  File "/usr/local/lib/python3.10/site-packages/dbt/task/build.py", line 1, in <module>
    from .run import RunTask, ModelRunner as run_model_runner
  File "/usr/local/lib/python3.10/site-packages/dbt/task/run.py", line 8, in <module>
    from .compile import CompileRunner, CompileTask
  File "/usr/local/lib/python3.10/site-packages/dbt/task/compile.py", line 4, in <module>
    from .runnable import GraphRunnableTask
  File "/usr/local/lib/python3.10/site-packages/dbt/task/runnable.py", line 11, in <module>
    from .printer import (
  File "/usr/local/lib/python3.10/site-packages/dbt/task/printer.py", line 22, in <module>
    from dbt.tracking import InvocationProcessor
  File "/usr/local/lib/python3.10/site-packages/dbt/tracking.py", line 26, in <module>
    import pytz
ModuleNotFoundError: No module named 'pytz'
The command '/bin/sh -c dbt deps' returned a non-zero code: 1

Expected Behavior

No response

Steps To Reproduce

Copy

FROM python:3.10
RUN pip install --upgrade pip
RUN pip install dbt-oracle==1.4.0
RUN dbt deps

into a file Dockerfile in an otherwise empty directory and run docker build .

Relevant log output using --debug flag enabled

No response

Environment

- OS: Linux
- Python: 3.10
- dbt: 1.4.0

What Oracle database version are you using dbt with?

No response

Additional Context

This was actually a bug in dbt-core (dbt-labs/dbt-core#7075), which is fixed in dbt-core 1.4.4.

Suggestion: Can you loosen the hard dependency on dbt-core 1.4.0 or replace it by dbt-core 1.4.4?

[Bug] Unexpected is_incremental() macro behavior after running model in full-refresh mode

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After running a model with the command dbt run -m stg_payments --full-refresh I'm encountering unexpected behavior.

Model stg_payments.sql looks like this:

{{
  config(
    materialized = 'incremental',
    )
}}

with payments as (
select
    id as payment_id,
    orderid as order_id,
    paymentmethod as payment_method,
    status,

    -- amount is stored in cents, convert it to dollars
    amount / 100 as amount,
    created as created_at

from {{ source('stripe', 'payment_src') }}
{% if is_incremental %}
  where created > (select max(created_at) from {{this}})
{% endif %}
) 

select * from payments

Per DBT's documentation is_incremental() macro should return True only if dbt is not running in full-refresh mode.

Compiled sql in dbt.log looks like this:

  create  table my_schema.stg_payments
  
  as
    

with payments as (
select
    id as payment_id,
    orderid as order_id,
    paymentmethod as payment_method,
    status,

    -- amount is stored in cents, convert it to dollars
    amount / 100 as amount,
    created as created_at

from schema.payment_src


  where created > (select max(created_at) from my_schema.stg_payments)

) 

select * from payments

Because of this I'm getting an error: ORA-00942: table or view does not exist

Expected Behavior

Expected is_incremental() macro to be ignored.

Steps To Reproduce

dbt run -m stg_payments --full-refresh

Relevant log output using --debug flag enabled

No response

Environment

- OS: Windows-10-10.0.19041-SP0
- Python: 3.9.0
- dbt: 1.3.0

What Oracle database version are you using dbt with?

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit

Additional Context

No response

[Feature] upgrade driver from `cx_oracle` to new `oracledb` package

Describe the Feature

I just saw on the cx_oracle homepage, that the package has been renamed to oracledb (PyPI | GitHub) as of May 25 2022. Beyond upgrading just to say current with the newest major versions, I saw that oracle/python-oracledb (oracledb) has a "thin client" mode which may obviate the need for users to install drivers as came up in Sam Fintz's issue in #db-oracle.

Might be worth looking into!

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

DBT Snapshot filling in dbt_valid_to for most recent record

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After running the dbt snapshot command and building a snapshot table with the invalidate_hard_deletes parameter set to True, various records are having their dbt_valid_to dates populated with the latest dbt_updated_at value, even though these records are still considered valid and have not been removed from their source (staging) tables. We are using the following code to build each snapshot:

{% snapshot tmp_device_ad %}

{{
    config(
           target_schema = 'dbt',
           strategy = 'timestamp',
           unique_key = 'snap_id',
           updated_at = 'update_dt',
           invalidate_hard_deletes = True
          )
}}

select ad.*

  from {{ ref('stg_device_ad') }} ad

{% endsnapshot %}

The number of records this is affecting is inconsistent between snapshots (sometimes it's a few dozen, or a few hundred from tables up to 30k), but it appears that it is affecting the same records on each run.

Expected Behavior

When building a dbt snapshot table with the invalidate_hard_deletes parameter set to True, I expect dbt to fill in the dbt_valid_to value ONLY if a record no longer exists in the source table, OR if a record has changed in the source table, in which case a new record is created in the snapshot table and the previous record is marked as invalid. An active record in the snapshot table SHOULD NOT have a dbt_valid_to date; instead it should have a value of null.

Steps To Reproduce

  1. Environment: Windows 10 Enterprise 21H2 19044.2251
  2. Config:
{% snapshot tmp_device_ad %}

{{
    config(
           target_schema = 'dbt',
           strategy = 'timestamp',
           unique_key = 'snap_id',
           updated_at = 'update_dt',
           invalidate_hard_deletes = True
          )
}}

select ad.*

  from {{ ref('stg_device_ad') }} ad

{% endsnapshot %}
  1. Run: dbt snapshot
  2. No errors are generated

Relevant log output using --debug flag enabled

No response

Environment

- OS: Windows 10 Enterprise 21H2 19044.2251
- Python: 3.9.11
- dbt: 1.3.0

What Oracle database version are you using dbt with?

19c

Additional Context

This issue looks to be similar, if not identical, to #2390 from dbt-core a few years ago that was resolved in v0.17.0. I've created a fork to play around with this and see if the two issues are related.

upgrade to support dbt-core v1.6.0

Background

Minor version v1.6 is targeted for final release on July 27, 2023. As a maintainer of a dbt adapter, we strongly encourage you to release a corresponding minor version increment to ensure users of your adapter can make use of this new minor version.

How to upgrade

dbt-labs/dbt-core#7958 is an open discussion with more detailed information. If you have questions, please put them there!

The above linked guide has more information, but below is a high-level checklist of work that would enable a successful 1.6.0 release of your adapter:

### Tasks
- [ ] SUPPORT: materialized views
- [ ] SUPPORT: new `clone` command
- [ ] BEHIND THE SCENES: Drop support for Python 3.7 (if you haven't already)
- [ ] BEHIND THE SCENES: new arg for `adapter.execute()`
- [ ] BEHIND THE SCENES: ensure support for revamped `dbt debug`
- [ ] BEHIND THE SCENES: Add support for new/modified relevant tests

the next minor release: 1.7.0

FYI, dbt-core==1.7.0 is expected to be released on October 12, 2023 in time for Coalesce, the annual analytics engineering conference!

Getting Error ORA-30926 when using snapshot functionality

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I'd like to historize my changes with the dbt snapshot feature and get the error ORA-30926.
I was able to reproduce the error by checking the dbt logs and the scripts that were run.
The problem is the generation of the dbt_unique_key using the ORA_HASH function. I repeatedly tried generating a unique key with different types of concatenated values but I keep getting duplicates.

Expected Behavior

There should be a function in Oracle to generate a reliable unique key to be able to use it in the subsequent merge statement.

Steps To Reproduce

run a sql statement on a large amount of data e.g. 400k records with a unique key and apply the ORA_HASH function to the key.

example:
SELECT count(*), count(DISTINCT prod_hash)
FROM
(SELECT prod_id, ORA_HASH(prod_id||'-'||sysdate) prod_hash
FROM product)

The expected result is that both counts are the same, but they differ
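The duplicates are not a defect of ORA_HASH specifically but of any fixed-range hash: with more distinct inputs than hash values, collisions are guaranteed by the pigeonhole principle, and well before that they become overwhelmingly likely (birthday bound). A sketch with a deliberately small range (ORA_HASH defaults to a 32-bit range; it is shrunk here so the effect is provable rather than merely probable):

```python
# Mimic a hash confined to a fixed bucket range.
BUCKETS = 2 ** 16

# More distinct keys than buckets -> collisions are unavoidable.
keys = [f"prod-{i}-2023-01-01" for i in range(70_000)]
hashes = {hash(k) % BUCKETS for k in keys}

print(len(keys), len(hashes))   # 70000 vs at most 65536
print(len(hashes) < len(keys))  # True, by the pigeonhole principle
```

This is why a hash is unsuitable as a dbt_unique_key over large row counts; a genuinely unique concatenated key (or a surrogate sequence) avoids the ORA-30926 merge error.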

Relevant log output using --debug flag enabled

No response

Environment

- OS: windows10
- Python: 3.10
- dbt: 1.4.0

What Oracle database version are you using dbt with?

19c

Additional Context

No response

[Bug] <DPY-6005 Database Error>

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Hi, getting the following error:


dbt was unable to connect to the specified database.
The database returned the following error:

Database Error
DPY-6005: cannot connect to database.
timed out


Same error when trying different connection methods: TNS, connect string, and database hostname. There is no issue with hostname resolution. Any clarity on the cause of this error? Thanks

Expected Behavior

No response

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS:Ubuntu
- Python:3.8.13
- dbt-oracle: 1.5.3

What Oracle database version are you using dbt with?

No response

Additional Context

No response

[Feature] parallel create table statements

Describe the Feature

How can I specify parallel hints for the CREATE TABLE statements (default materializations) which dbt-oracle generates? If these are not yet part of dbt-oracle, do I need to write a full-blown custom materialization, or is there an easier path forward?

How about specifying options on CREATE TABLE, such as compression, i.e. compress for query high?

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

[Bug] "Oracle error: ORA-00972: identifier is too long" with view materialization

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Related to #62.

I have a model with 20 characters, materialized as a view. On dbt build,

{% macro oracle__drop_relation(relation) -%}
is executed, trying to drop a non-existent view whose name happens to be 32 characters long:

DECLARE
     dne_942    EXCEPTION;
     PRAGMA EXCEPTION_INIT(dne_942, -942);
     attempted_ddl_on_in_use_GTT EXCEPTION;
     pragma EXCEPTION_INIT(attempted_ddl_on_in_use_GTT, -14452);
  BEGIN
     SAVEPOINT start_transaction;
     EXECUTE IMMEDIATE 'DROP view ********.********************__dbt_backup cascade constraint';
     COMMIT;
  EXCEPTION
     WHEN attempted_ddl_on_in_use_GTT THEN
        NULL; -- if it its a global temporary table, leave it alone.
     WHEN dne_942 THEN
        NULL;
  END;

This fails on Oracle 11g, but the funny thing is that the statement wouldn't have any effect anyway, as there is no backup view to delete.

For now, I've overriden the macro's content to:

{% macro oracle__drop_relation(relation) -%}
  {% if relation.type != "view" %}
    {% call statement('drop_relation', auto_begin=False) -%}
    DECLARE
        dne_942    EXCEPTION;
        PRAGMA EXCEPTION_INIT(dne_942, -942);
        attempted_ddl_on_in_use_GTT EXCEPTION;
        pragma EXCEPTION_INIT(attempted_ddl_on_in_use_GTT, -14452);
    BEGIN
        SAVEPOINT start_transaction;
        EXECUTE IMMEDIATE 'DROP {{ relation.type }} {{ relation }} cascade constraint';
        COMMIT;
    EXCEPTION
        WHEN attempted_ddl_on_in_use_GTT THEN
            NULL; -- if it its a global temporary table, leave it alone.
        WHEN dne_942 THEN
            NULL;
    END;
    {%- endcall %}
  {% endif %}
{% endmacro %}

It covers my specific case for now but I'm sure there's more to it so I won't submit such a PR.

Note that the problem of the "too long" identifiers was already fixed upstream for PostgreSQL (dbt-labs/dbt-core#2869). It would be very appreciated if something like that could be done for Oracle too.
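The upstream PostgreSQL fix truncates over-long relation names and appends a hash to keep them unique; a comparable sketch for Oracle's 30-byte limit on 11g (helper name hypothetical, not the adapter's actual code):

```python
import hashlib

MAX_IDENTIFIER_LENGTH = 30  # Oracle up to 12.1; 128 bytes from 12.2 onward

def truncated_identifier(name: str, max_len: int = MAX_IDENTIFIER_LENGTH) -> str:
    """Shorten an over-long identifier deterministically while keeping it unique."""
    if len(name) <= max_len:
        return name
    digest = hashlib.md5(name.encode()).hexdigest()[:8]
    return name[: max_len - 9] + "_" + digest

long_name = "twenty_char_model_xx" + "__dbt_backup"   # 32 chars, too long for 11g
short = truncated_identifier(long_name)
print(short, len(short))  # always within the limit, stable across runs
```

Because the truncation is deterministic, the same model always maps to the same backup/intermediate name, so the drop/rename dance still lines up.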

Expected Behavior

I should be able to use any name length that's acceptable for the DB.

Steps To Reproduce

Create a 20 characters long model name, materialized as a view and build it multiple times.

Relevant log output using --debug flag enabled

No response

Environment

- OS: Windows 
- Python: 3.11
- dbt: 1.4.2

What Oracle database version are you using dbt with?

11g

Additional Context

No response

[Bug] "Compilation error" on second run. "When searching for a relation, dbt found an approximate match. Instead of guessing which relation to use, dbt will move on."

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I am a beginner, trying to make a minimal working project using real data from an Oracle db that I have access to. My project consists of only a single model, which simply selects two columns from an existing table in my Oracle db.

the model is called DBT_TEST_DM_SUBSET.sql

its contents are simply this:
SELECT MATERIAL_NUMBER_DESC, MATERIAL_GROUP_DESC FROM FCDR_V6_DM_MVW

FCDR_V6_DM_MVW is a table (actually a materialized view) in a schema called EXPLORER.

My project's name is "dan_test". When I run this at the command line:
dbt run --select dan_test

the first time everything is ok, and the view called DBT_TEST_DM_SUBSET is correctly created in my oracle db.
The second time I run dbt run, I get this error:
(screenshot of the compilation error: "When searching for a relation, dbt found an approximate match. Instead of guessing which relation to use, dbt will move on.")

Using flag "--full-refresh" does not help.
I am aware this error often has to do with lower/upper casing. However, in my profiles.yml, my schema is also all-caps: EXPLORER . The model-defining file name itself is also in all-caps: DBT_TEST_DM_SUBSET.sql

What could be causing this error? Because I am a raw beginner, I have no way of knowing if this is a bug, or if I have configured something incorrectly.

I am on the brand-new version of dbt-oracle (1.3.0), but I also had the same problem before updating.

Expected Behavior

No response

Steps To Reproduce

No response

Relevant log output using --debug flag enabled

No response

Environment

- OS: macOS 12.6
- Python: 3.8
- dbt: 1.3.0

What Oracle database version are you using dbt with?

12c

Additional Context

No response

upgrade to support dbt-core v1.2.0

We've just published the release cut of dbt-core 1.2.0, dbt-core 1.2.0rc1 (PyPI | GitHub release notes).

dbt-labs/dbt-core#5468 is an open discussion with more detailed information, and dbt-labs/dbt-core#5474 is for keeping track of the community's progress on releasing 1.2.0

Below is a checklist of work that would enable a successful 1.2.0 release of your adapter.

  • migrate necessary cross-db macros into adapter and ensure they're tested accordingly
  • remove any copy-and-pasted materialization (if your adapter inherits from another adapter)
  • add new basic tests BaseDocsGenerate and BaseDocsGenReferences
  • consider checking and testing support for Python 3.10

dbt-labs/dbt-core#5432 might make it into the second release cut in the next week, in which case you might also want to:

  • implement method and tests for connection retry logic
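
The retry item above could be sketched as follows. This is a minimal, hypothetical helper (not dbt-core's actual API): `connect` stands in for any zero-argument callable that opens a database connection, and the backoff doubles on each failed attempt.

```python
import time


def connect_with_retry(connect, retries=3, backoff=1.0):
    """Retry `connect()` up to `retries` times with exponential backoff.

    Re-raises the last exception if every attempt fails.
    """
    for attempt in range(retries):
        try:
            return connect()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts; surface the original error
            time.sleep(backoff * 2 ** attempt)
```

An adapter would typically restrict the `except` clause to the driver's transient connection errors rather than catching `Exception` broadly.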

[Feature] Adding persist_docs to dbt-oracle

Describe the Feature

I would like to see this feature implemented in dbt-oracle, as I see great benefit in database table and column comments being created from YAML model descriptions.
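
For context, the standard dbt configuration for this feature looks roughly like the snippet below (a sketch of a `dbt_project.yml` fragment; `my_project` is a placeholder project name):

```yaml
models:
  my_project:
    +persist_docs:
      relation: true   # write the model description as a table/view comment
      columns: true    # write column descriptions as column comments
```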

Describe alternatives you've considered

No response

Who will this benefit?

No response

Anything else?

No response

[Bug] DBT is trying to create table for incremental model but it exists already

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

We have an incremental model, and when running dbt the run fails with ORA-00955: name is already used by an existing object.

Expected Behavior

DBT should see that the table already exists

Steps To Reproduce

Can provide you our code for review

Relevant log output using --debug flag enabled

No response

Environment

- OS: Windows
- Python: 3.11
- dbt: 1.6

What Oracle database version are you using dbt with?

No response

Additional Context

No response

[Bug] Seed fails, if a column has no values

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When a seed file (CSV) has a column with no values, dbt seed fails with

max() arg is an empty sequence

Expected Behavior

The seeding is successful

  • the table is created with value-less column
  • data is inserted correctly with value-less column having NULL values

Steps To Reproduce

  1. Add a CSV file in seeds/some_data.csv
    field_1,field_2,field_3
    1,2,
    4,5,
    
  2. Add following to dbt_project.yml
    seeds:
      dbt_project:
        some_data:
          +column_types:
            field_1: number
            field_2: number
            field_3: number
    
  3. Run dbt seed

Relevant log output using --debug flag enabled

17:52:01  While listing relations in database=ORCL, schema=dbt, found: 
17:52:01  Timing info for seed.dbt_project.some_data (execute): 19:52:00.747684 => 19:52:01.390782
17:52:01  On seed.dbt_project.some_data: ROLLBACK
17:52:01  On seed.dbt_project.some_data: Close
17:52:01  Unhandled error while executing 
max() arg is an empty sequence
17:52:01  Traceback (most recent call last):
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/task/base.py", line 388, in safe_run
    result = self.compile_and_execute(manifest, ctx)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/task/base.py", line 337, in compile_and_execute
    result = self.run(ctx.node, manifest)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/task/base.py", line 436, in run
    return self.execute(compiled_node, manifest)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/task/run.py", line 291, in execute
    result = MacroGenerator(
             ^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/clients/jinja.py", line 330, in __call__
    return self.call_macro(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/clients/jinja.py", line 257, in call_macro
    return macro(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 763, in __call__
    return self._invoke(arguments, autoescape)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 777, in _invoke
    rv = self._func(*arguments)
         ^^^^^^^^^^^^^^^^^^^^^^
  File "<template>", line 94, in macro
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/sandbox.py", line 393, in call
    return __context.call(__obj, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 298, in call
    return __obj(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/clients/jinja.py", line 330, in __call__
    return self.call_macro(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/clients/jinja.py", line 257, in call_macro
    return macro(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 763, in __call__
    return self._invoke(arguments, autoescape)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 777, in _invoke
    rv = self._func(*arguments)
         ^^^^^^^^^^^^^^^^^^^^^^
  File "<template>", line 21, in macro
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/sandbox.py", line 393, in call
    return __context.call(__obj, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 298, in call
    return __obj(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/clients/jinja.py", line 330, in __call__
    return self.call_macro(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/clients/jinja.py", line 257, in call_macro
    return macro(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 763, in __call__
    return self._invoke(arguments, autoescape)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 777, in _invoke
    rv = self._func(*arguments)
         ^^^^^^^^^^^^^^^^^^^^^^
  File "<template>", line 44, in macro
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/sandbox.py", line 393, in call
    return __context.call(__obj, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/jinja2/runtime.py", line 298, in call
    return __obj(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/adapters/base/impl.py", line 960, in convert_type
    return cls.convert_agate_type(agate_table, col_idx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/adapters/base/impl.py", line 975, in convert_agate_type
    return func(agate_table, col_idx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/my/dbt/project/.venv/lib/python3.11/site-packages/dbt/adapters/oracle/impl.py", line 109, in convert_text_type
    max_len = max(lens) if lens else 64
              ^^^^^^^^^
ValueError: max() arg is an empty sequence
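
The guard visible in the traceback, `max(lens) if lens else 64`, can still raise if `lens` is a generator: a generator object is truthy even when it yields nothing. The traceback does not show how `lens` is built, so this is a minimal sketch of that failure mode, not the adapter's actual code:

```python
# Empty generator: bool(lens) is True, but max(lens) has nothing to consume.
lens = (len(v) for v in [])

try:
    max_len = max(lens) if lens else 64  # raises ValueError despite the guard
except ValueError:
    max_len = 64  # fallback width for a value-less column
```

Materializing the lengths first (e.g. `lens = [len(v) for v in values]`) would make the `if lens` check behave as intended.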

Environment

- OS: macOS 13.5
- Python: Python 3.11.1
- dbt: 1.5.3
- dbt-oracle: 1.5.2

What Oracle database version are you using dbt with?

19c

Additional Context

No response

upgrade to support dbt-core v1.4.0

Background

The latest version of dbt Core, dbt-core==1.4.0, was published on January 25, 2023 (PyPI | Github). In fact, a patch, dbt-core==1.4.1 (PyPI | Github), was also released on the same day.

How to upgrade

dbt-labs/dbt-core#6624 is an open discussion with more detailed information. If you have questions, please put them there! dbt-labs/dbt-core#6849 is for keeping track of the community's progress on releasing 1.4.0

The above linked guide has more information, but below is a high-level checklist of work that would enable a successful 1.4.0 release of your adapter.

  • support Python 3.11 (only if your adapter's dependencies allow)
  • Consolidate timestamp functions & macros
  • Replace deprecated exception functions
  • Add support for more tests

the next minor release: 1.5.0

FYI, dbt-core==1.5.0 is expected to be released at the end of April. Please plan on allocating more effort to upgrading support compared to previous minor versions. Expect to hear more in the middle of April.

At a high level, expect much greater adapter test coverage (a very good thing!) and some likely heavy renaming and restructuring, as the API-ification of dbt-core is now well underway. See https://github.com/dbt-labs/dbt-core/milestone/82 for more information.
