
dbt_mixpanel's Introduction

Mixpanel dbt Package (Docs)

📣 What does this dbt package do?

  • Produces modeled tables that leverage Mixpanel data from Fivetran's connector. It uses the Mixpanel event table in the format described by this ERD.

  • Enables you to better understand user activity and retention through your event data. It:

    • Creates both a daily and monthly timeline of each type of event, complete with metrics about user activity, retention, resurrection, and churn
    • Aggregates events into unique user sessions, complete with metrics about event frequency and any relevant fields from the session's first event
    • Provides a macro to easily create an event funnel
    • De-duplicates events according to best practices from Mixpanel
    • Pivots out custom event properties from JSONs into an enriched events table
  • Generates a comprehensive data dictionary of your source and modeled Mixpanel data through the dbt docs site. The following table provides a detailed list of all models materialized within this package by default.

TIP: See more details about these models in the package's dbt docs site.

  • mixpanel__event: Each record represents a de-duplicated Mixpanel event. This includes the default event properties collected by Mixpanel, along with any declared custom columns and event-specific properties.
  • mixpanel__daily_events: Each record represents a day's activity for a type of event, as reflected in user metrics. These include the number of new, repeat, and returning/resurrecting users, as well as trailing 7-day and 28-day unique users.
  • mixpanel__monthly_events: Each record represents a month of activity for a type of event, as reflected in user metrics. These include the number of new, repeat, returning/resurrecting, and churned users, as well as the total active monthly users (regardless of event type).
  • mixpanel__sessions: Each record represents a unique user session, including metrics reflecting the frequency and type of actions taken during the session and any relevant fields from the session's first event.

🎯 How do I use the dbt package?

Step 1: Prerequisites

To use this dbt package, you must have the following:

  • At least one Fivetran Mixpanel connector syncing data into your destination.
  • A BigQuery, Snowflake, Redshift, PostgreSQL, or Databricks destination.

Databricks dispatch configuration

If you are using a Databricks destination with this package, you must add the following (or a variation of the following) dispatch configuration within your dbt_project.yml. This is required for the package to search for macros within the dbt-labs/spark_utils package and then the dbt-labs/dbt_utils package, in that order.

dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']

Database Incremental Strategies

Many of the end models in this package are materialized incrementally, so we have configured our models to work with the different strategies available to each supported warehouse.

For BigQuery and Databricks destinations, we have chosen insert_overwrite as the default strategy, which benefits from the partitioning capability.

For Snowflake, Redshift, and Postgres databases, we have chosen delete+insert as the default strategy.
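As a rough sketch, the per-warehouse strategy selection in a model's config block might look like the following. This is illustrative only: the unique_event_id key and the always-set file_format are assumptions here, and the package's actual config blocks may differ per model and version.

{{
    config(
        materialized='incremental',
        unique_key='unique_event_id',
        partition_by={'field': 'date_day', 'data_type': 'date'} if target.type not in ('spark', 'databricks') else ['date_day'],
        incremental_strategy='insert_overwrite' if target.type in ('bigquery', 'spark', 'databricks') else 'delete+insert',
        file_format='delta'
    )
}}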

Regardless of strategy, we recommend that users periodically run a --full-refresh to ensure a high level of data quality.
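For example, using dbt's package: selection method, a periodic full refresh of just this package's models could be run as:

dbt run --full-refresh --select package:mixpanel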

Step 2: Install the package

Include the following mixpanel package version in your packages.yml file:

TIP: Check dbt Hub for the latest installation instructions or read the dbt docs for more information on installing packages.

packages:
  - package: fivetran/mixpanel
    version: [">=0.9.0", "<0.10.0"] # we recommend using ranges to capture non-breaking changes automatically

Step 3: Define database and schema variables

By default, this package runs using your destination and the mixpanel schema. If this is not where your Mixpanel data is (for example, if your Mixpanel schema is named mixpanel_fivetran), add the following configuration to your root dbt_project.yml file:

vars:
    mixpanel_database: your_database_name
    mixpanel_schema: your_schema_name 

(Optional) Step 4: Additional configurations

Macros

analyze_funnel (source)

You can use the analyze_funnel(event_funnel, group_by_column, conversion_criteria) macro to produce a funnel between a given list of event types.

It returns the following:

  • The number of events and users at each step
  • The overall user and event conversion % between the top of the funnel and each step
  • The relative user and event conversion % between subsequent steps

Note: The relative order of the steps is determined by their event volume, not the order in which they are input.

The macro takes the following as arguments:

  • event_funnel: List of event types (not case sensitive).
    • Example: ['play_song', 'stop_song', 'exit']
  • group_by_column: (Optional) A column by which you want to segment the funnel (this macro pulls data from the mixpanel__event model). The default value is None.
    • Example: group_by_column = 'country_code'.
  • conversion_criteria: (Optional) A WHERE clause that will be applied when selecting from mixpanel__event.
    • Example: To limit all events in the funnel to the United States, you'd provide conversion_criteria = 'country_code = "US"'. To restrict only play_song events to the US (leaving other event types unfiltered), you'd input conversion_criteria = 'country_code = "US" OR event_type != "play_song"'.
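For example, a minimal analysis file calling the macro might look like the sketch below. The event names and group-by column are illustrative, not part of the package.

{{ mixpanel.analyze_funnel(
    event_funnel=['play_song', 'stop_song', 'exit'],
    group_by_column='country_code',
    conversion_criteria='country_code = "US"'
) }}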

Pivoting Out Event Properties

By default, this package selects the default columns collected by Mixpanel. However, you likely have custom properties or columns that you'd like to include in the mixpanel__event model.

If there are properties in the mixpanel.event.properties JSON blob that you'd like to pivot out into columns, add the following variable to your dbt_project.yml file:

vars:
  mixpanel:
    event_properties_to_pivot: ['the', 'list', 'of', 'property', 'fields'] # Note: this is case-SENSITIVE and must match the casing of the property as it appears in the JSON
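For reference, on Snowflake the generated pivot SQL for each property looks roughly like the line below (a snippet of this shape is quoted in an issue later on this page; the property name here is illustrative). This also shows why the casing must match the JSON exactly.

replace(parse_json(event_properties):property_field, '"', '') as property_field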

Passthrough Columns

Additionally, this package includes all standard source EVENT columns defined in the staging_columns macro. You can add more columns using our passthrough column variables. These variables allow passthrough fields to be aliased (alias) and cast (transform_sql) if desired, though neither is required. Data type casting is configured via a SQL snippet within the transform_sql key. Add the desired SQL snippet while omitting the as field_name part of the casting statement; the alias attribute handles naming, and your custom passthrough fields will be cast accordingly.

Use the following format for declaring the respective passthrough variables:

vars:
  mixpanel:
    event_custom_columns:
      - name:           "property_field_id"
        alias:          "new_name_for_this_field_id"
        transform_sql:  "cast(property_field_id as int64)"
      - name:           "this_other_field"
        transform_sql:  "cast(this_other_field as string)"

Sessions Event Frequency Limit

The event_frequencies field within the mixpanel__sessions model reports each event type and its frequency as a JSON blob, built via a string aggregation. Some users can have thousands of distinct event types, and Redshift and Postgres limit string aggregations to 65,535 characters. So that Redshift and Postgres users can still leverage the event_frequencies field, the package applies an artificial limit of 1,000 events to this field. To adjust this limit, modify the variable below in your project configuration.

vars:
  mixpanel:
    mixpanel__event_frequency_limit: 500 ## Default is 1000

Event Date Range

Because of the typical volume of event data, you may want to limit this package's models to work with a recent date range of your Mixpanel data (however, note that all final models are materialized as incremental tables).

By default, the package looks at all events since January 1, 2010. To change this start date, add the following variable to your dbt_project.yml file:

vars:
  mixpanel:
    date_range_start: 'yyyy-mm-dd' 

Note: This date range will not affect the number_of_new_users column in the mixpanel__daily_events or mixpanel__monthly_events models; that metric will always reflect true new users.

Global Event Filters

In addition to limiting the date range, you may want to employ other filters to remove noise from your event data.

To apply a global filter to events (and therefore all models in this package), add the following variable to your dbt_project.yml file. It will be applied as a WHERE clause when selecting from the source table, mixpanel.event.

vars:
  mixpanel:
    # Ex: removing internal user
    global_event_filter: 'distinct_id != "1234abcd"'

Session Configurations

Session Inactivity Timeout

This package sessionizes events based on the periods of inactivity between a user's events on a device. By default, the package will denote a new session once the period between events surpasses 30 minutes.

To change this timeout value, add the following variable to your dbt_project.yml file:

vars:
  mixpanel:
    sessionization_inactivity: number_of_minutes # ex: 60
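Conceptually, sessionization of this kind compares each event's timestamp to the user's previous event on the same device and starts a new session when the gap exceeds the timeout. Below is a simplified sketch, not the package's exact SQL: the relation and column names are illustrative, and interval syntax varies by warehouse.

with ordered_events as (

    select
        people_id,
        device_id,
        occurred_at,
        lag(occurred_at) over (
            partition by people_id, device_id
            order by occurred_at
        ) as previous_event_at
    from mixpanel_events  -- illustrative relation name

)

select
    *,
    -- flag the start of a new session when the inactivity gap is exceeded
    case
        when previous_event_at is null
            or occurred_at > previous_event_at + interval '30 minutes'
        then 1
        else 0
    end as is_new_session
from ordered_events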

Session Pass-Through Columns

By default, the mixpanel__sessions model will contain the following columns from mixpanel__event:

  • people_id: The ID of the user
  • device_id: The ID of the device they used in this session
  • event_frequencies: A JSON of the frequency of each event_type in the session

To pass through any additional columns from the events table to mixpanel__sessions, add the following variable to your dbt_project.yml file. The value of each field will be pulled from the first event of the session.

vars:
  mixpanel:
    session_passthrough_columns: ['the', 'list', 'of', 'column', 'names'] 

Session Event Criteria

In addition to any global event filters, you may want to exclude certain events, or apply filters to them, to determine which events qualify for sessionization.

To apply any filters to the events in the sessions model, add the following variable to your dbt_project.yml file. It will be applied as a WHERE clause when selecting from mixpanel__event.

vars:
  mixpanel:

    # ex: limit sessions to include only these kinds of events
    session_event_criteria: 'event_type in ("play_song", "stop_song", "create_playlist")'

Lookback Window

Events can sometimes arrive late. For example, events triggered on a mobile device that is offline will be sent to Mixpanel once the device reconnects to wifi or a cell network. Since many of the models in this package are incremental, by default we look back 7 days to ensure late arrivals are captured while avoiding requiring a full refresh. To change the default lookback window, add the following variable to your dbt_project.yml file:

vars:
  mixpanel:
    lookback_window: number_of_days # default is 7
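As a rough sketch (simplified and not the package's exact code), the lookback shifts the incremental filter's lower bound back by lookback_window days; here 7 stands in for the variable and date_day for the model's date column:

occurred_at >= coalesce(
    -- illustrative: look back 7 days from the latest loaded date
    (select {{ dbt.dateadd('day', -7, 'max(date_day)') }} from {{ this }}),
    '2010-01-01'
)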

Changing the Build Schema

By default, this package builds the Mixpanel staging models within a schema titled (<target_schema> + _stg_mixpanel) and the Mixpanel final models within a schema titled (<target_schema> + _mixpanel) in your target database. If this is not where you would like your modeled Mixpanel data to be written, add the following configuration to your dbt_project.yml file:

models:
    mixpanel:
      +schema: my_new_schema_name # leave blank for just the target_schema
      staging:
        +schema: my_new_schema_name # leave blank for just the target_schema

Change the source table references

If an individual source table has a different name than the package expects, add the table name as it appears in your destination to the respective variable:

IMPORTANT: See this project's dbt_project.yml variable declarations to see the expected names.

vars:
    mixpanel_<default_source_table_name>_identifier: your_table_name 

Event De-Duplication Logic

Events are considered duplicates and consolidated by the package if they contain the same:

  • insert_id (used for de-duplication internally by Mixpanel)
  • people_id (originally named distinct_id)
  • type of event
  • calendar date of occurrence (event timestamps are set in the timezone the Mixpanel project is configured to)

This is performed in line with Mixpanel's internal de-duplication process, in which events are de-duped at the end of each day. This means that if an event was triggered during an offline session at 11:59 PM and resent when the user came online at 12:01 AM, these records would not be de-duplicated. This is the case in both Mixpanel and the Mixpanel dbt package.
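A minimal sketch of this style of de-duplication (illustrative SQL, not the package's exact code) keeps one row per insert_id, person, event type, and calendar date:

with ranked_events as (

    select
        *,
        row_number() over (
            partition by insert_id, people_id, event_type, cast(occurred_at as date)
            order by occurred_at
        ) as nth_event_record
    from source_events  -- illustrative relation name

)

select *
from ranked_events
where nth_event_record = 1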

(Optional) Step 5: Orchestrate your models with Fivetran Transformations for dbt Core™


Fivetran offers the ability for you to orchestrate your dbt project through Fivetran Transformations for dbt Core™. Learn how to set up your project for orchestration through Fivetran in our Transformations for dbt Core setup guides.

🔍 Does this package have dependencies?

This dbt package is dependent on the following dbt packages. Please be aware that these dependencies are installed by default within this package. For more information on the following packages, refer to the dbt hub site.

IMPORTANT: If you have any of these dependent packages in your own packages.yml file, we highly recommend that you remove them from your root packages.yml to avoid package version conflicts.

packages:
    - package: fivetran/fivetran_utils
      version: [">=0.4.0", "<0.5.0"]

    - package: dbt-labs/dbt_utils
      version: [">=1.0.0", "<2.0.0"]

🙌 How is this package maintained and can I contribute?

Package Maintenance

The Fivetran team maintaining this package only maintains the latest version of the package. We highly recommend you stay consistent with the latest version of the package and refer to the CHANGELOG and release notes for more information on changes across versions.

Contributions

A small team of analytics engineers at Fivetran develops these dbt packages. However, the packages are made better by community contributions!

We highly encourage and welcome contributions to this package. Check out this dbt Discourse article on the best workflow for contributing to a package!

🏪 Are there any resources available?

  • If you have questions or want to reach out for help, please refer to the GitHub Issue section to find the right avenue of support for you.
  • If you would like to provide feedback to the dbt package team at Fivetran or would like to request a new dbt package, fill out our Feedback Form.
  • Have questions or want to just say hi? Book a time during our office hours on Calendly or email us at [email protected].

dbt_mixpanel's People

Contributors

dylanbaker, fivetran-catfritz, fivetran-chloe, fivetran-dejantucakov, fivetran-jamie, fivetran-joemarkiewicz, fivetran-markoteodorovic, fivetran-reneeli, fivetran-sheringuyen


dbt_mixpanel's Issues

[Feature] Postgres compatibility

Is there an existing feature request for this?

  • I have searched the existing issues

Describe the Feature

Postgres compatibility

Describe alternatives you've considered

No response

Are you interested in contributing this feature?

  • Yes.
  • Yes, but I will need assistance and will schedule time during your office hours for guidance.
  • No.

Anything else?

No response

FEATURE - dedupe users who have multiple mixpanel accounts?

So, customers can store their internal user_ids (or whatever they might call them) in an event's properties. This is different from the event's distinct_id, which is a unique identifier for the Mixpanel account. One person can get associated with multiple distinct_ids in Mixpanel, perhaps by using a different device or triggering events while signed out.

Wondering if we should offer the ability to perform event de-duping and/or sessionization at the REAL user level (not the distinct_id level). Will check out our sample data to see if I can find any cases where doing so would make a significant impact.

Package not working on Google BigQuery

I recently upgraded to the 0.2.0 release of the package and it will no longer run for me.

However, if I run with a previous version of the package, the only change I make to the dbt project, then this runs without error in BigQuery and produces the datasets.

The error I am receiving is:

Completed with 1 error and 0 warnings:

Runtime Error in model stg_mixpanel__event_tmp (models/staging/tmp/stg_mixpanel__event_tmp.sql)
  404 GET https://bigquery.googleapis.com/bigquery/v2/projects/backend-producti-b8633498/queries/d53025ae-64dc-4846-802b-2080411adce6?maxResults=0&location=EU&prettyPrint=false: Not found: Job backend-producti-b8633498:EU.d53025ae-64dc-4846-802b-2080411adce6
  
  (job ID: d53025ae-64dc-4846-802b-2080411adce6)

Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1

I am running all my datasets within an EU region on GCP. My profiles.yml file is configured as follows (some fields redacted for security purposes):

default:
  target: default
  outputs:
    default:
      type: bigquery
      method: service-account
      project: xxxxxxxx
      threads: 1
      location: EU
      keyfile: /path/to/file/key_file.json
      timeout_seconds: 300
      priority: interactive
      schema: dbt_xxxx
      retries: 1

mixpanel__sessions throws LISTAGG limit error

Is there an existing issue for this?

  • I have searched the existing issues

Describe the issue

When running mixpanel__sessions model, redshift throws the following error:

21:45:36  Database Error in model mixpanel__sessions (models/mixpanel__sessions.sql)
21:45:36    Result size exceeds LISTAGG limit
21:45:36    DETAIL:  
21:45:36      -----------------------------------------------
21:45:36      error:  Result size exceeds LISTAGG limit
21:45:36      code:      8001
21:45:36      context:   LISTAGG limit: 65535
21:45:36      query:     933154
21:45:36      location:  string_ops.cpp:116
21:45:36      process:   query3_246_933154 [pid=3755]
21:45:36      -----------------------------------------------

Relevant error log or model output

No response

Expected behavior

The model should run without error

dbt Project configurations

# Name your project! Project names should contain only lowercase characters
# and underscores. A good package name should reflect your organization's
# name or the intended use of these models
name: "dwh"
version: "1.0.0"
config-version: 2

# This setting configures which "profile" dbt uses for this project.
profile: "curative-dwh"

# These configurations specify where dbt should look for different types of files.
# The `model-paths` config, for example, states that models in this project can be
# found in the "models/" directory. You probably won't need to change these!
model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]

log-path:

target-path: "target" # directory which will store compiled SQL files
clean-targets: # directories to be removed by `dbt clean`
  - "target"
  - "dbt_packages"

on-run-start:
  # TODO: create script that runs on staging to make sure that the _0cp and _prod schemas are in sync with any new landing tables
  
  # when a new schema is created, fix permissions
  # NEED TO FIGURE OUT PERMISSIONS FOR THIS -->  "{{ update_permissions_for_lnd_schemas() }}"
  
on-run-end:
  # drop any models not in dbt project. only on staging and dev (not prod and ci; too risky)
  - "{{ drop_orphaned_relations(['base', 'curated', 'aggregated'], ['mixpanel', 'fivetran_log'], False if target.name in ['staging'] else True) }}"
  - "{{ re_data.save_test_history(results) if 'prod' == target.name }}"

vars:
  force_row_limit: -1 # if want to enforce a row limit, like we do for CI env

  mixpanel:
    mixpanel_database: dwh
    mixpanel_schema: lnd_mixpanel
    event_table: "{{ source('mixpanel', 'event') }}{{ limit_clause_by_env(10000) }}"
    date_range_start: "2022-03-30"

  greenhouse_database: dwh
  greenhouse_schema: lnd_greenhouse
  greenhouse_using_prospects: true
  greenhouse_using_eeoc: true
  greenhouse_using_app_history: false
  greenhouse_using_job_office: true
  greenhouse_using_job_department: true

  fivetran_log:
    fivetran_log_database: dwh
    fivetran_log_schema: lnd_fivetran
    fivetran_log_using_transformations: false # this will disable all transformation + trigger_table logic
    fivetran_log_using_triggers: false # this will disable only trigger_table logic
    log: "{{ source('fivetran_log', 'log') }}{{ limit_clause_by_env() }}"

models:
  +bind: false # Materialize all views as late-binding
  +post-hook:
    - "{{ redshift.compress_table(this.schema, this.table, drop_backup=False) if this.type == 'table' }}"
    - "{{ test_late_binding_view() }}"

  re_data:
    +enabled: "{{ 'prod' == target.name }}"
    +schema: base
    internal:
      +schema: base

  redshift:
    +bind: true
    +schema: base

  greenhouse_source:
    +schema: base

  mixpanel:
    +schema: base
    staging:
      +schema: base
      +docs:
        show: False

  fivetran_log:
    +schema: base
    staging:
      +schema: base
      +docs:
        show: False

  dwh:
    +materialized: view
    +re_data_monitored: true
    
    base:
      +schema: base
      
      app_public:
        staging:
          +re_data_monitored: false
          +materialized: ephemeral
          +docs:
            show: False
          
      revcycle:
        staging:
          +materialized: ephemeral
          +re_data_monitored: false

      ups:
        +enabled: False # busted for now
      safegraph:
        +enabled: False # need to setup
      rd:
        +enabled: False # need to setup

      twilio:
        twilio_call:
          +enabled: False
        twilio_role:
          +enabled: False
        twilio_role_permission:
          +enabled: False
        twilio_service:
          +enabled: False

    curated:
      +schema: curated
      core:
        cur_core_safegraph_visits_by_home_cbg:
          +enabled: False # need to setup

    aggregated:
      +schema: aggregated

sources:
  greenhouse_source:
    greenhouse:
      application_history:
        +enabled: False

  #dwh:
  #  landing:
  #    src_quadax:
  #      +re_data_monitored: true
  #    src_app_public:
  #      +re_data_monitored: false

seeds:
  +schema: base

tests:
  fivetran_log:
    +enabled: False

Package versions

  - package: dbt-labs/redshift
    version: 0.6.0

  - package: dbt-labs/dbt_utils
    version: 0.8.0

  - package: dbt-labs/dbt_external_tables
    version: 0.8.0

  - package: dbt-labs/codegen
    version: 0.5.0

  - package: fivetran/fivetran_utils
    version: 0.3.4

  - package: fivetran/fivetran_log
    version: 0.5.2

What database are you using dbt with?

redshift

dbt Version

1.0

Additional Context

No response

Are you willing to open a PR to help address this issue?

  • Yes.
  • Yes, but I will need assistance and will schedule time during our office hours for guidance
  • No.

[Bug] partition_by config parameter doesn't work with Snowflake

Is there an existing issue for this?

  • I have searched the existing issues

Describe the issue

In the mixpanel__<table> data models, the config block is set up to use partition_by, which is not a recognized parameter for Snowflake. For clustering to work in Snowflake, you need to use the cluster_by parameter.

The current value, partition_by={'field': 'date_day', 'data_type': 'date'} if target.type not in ('spark','databricks') else ['date_day'], has some logic that uses different values for Databricks vs. all other platforms. The issue is that Snowflake just ignores this parameter completely.

To fix this issue, I followed the instruction of disabling the base dbt_mixpanel version of the model and writing a copy of the model with the correct config block.

Here's the config block that ended up working for me. Without having other platforms to test on, I do not know if this would work everywhere. I assume Databricks needs partition_by while Snowflake needs cluster_by.

{{
    config(
        materialized='incremental',
        unique_key='unique_event_id',
        cluster_by="date_day",
        incremental_strategy = 'delete+insert',
    )
}}

Relevant error log or model output

Using the default parameters on Snowflake, these are the last few lines of the compiled SQL.



    from dedupe

)

select * from pivot_properties


Expected behavior

Using the correct Snowflake parameters, these are the last few lines of the compiled SQL.
from dedupe

)

select * from pivot_properties
) order by (date_day)
);
alter table dbt.int_mixpanel__event cluster by (date_day);


dbt Project configurations

config-version: 2
name: 'mixpanel'
version: '0.8.0'
require-dbt-version: [">=1.3.0", "<2.0.0"]

Package versions

  - package: fivetran/mixpanel
    version: 0.8.0

What database are you using dbt with?

snowflake

dbt Version

- installed: 1.6.2

Additional Context

No response

Are you willing to open a PR to help address this issue?

  • Yes.
  • Yes, but I will need assistance and will schedule time during our office hours (https://calendly.com/fivetran-solutions-team/fivetran-solutions-team-office-hours) for guidance
  • No. (selected)

Any chance of supporting Postgres?

Hey Fivetran team,

Been loving Fivetran connectors and dbt transformers on top of those connectors. Set up our Stripe to use them and have very clean data that we can use in our dashboards.

Wanted to do the same for our Mixpanel data but unfortunately, this package doesn't work for Postgres. Looking at it, there seem to be only a few things that make this incompatible with Postgres like how the interval is formatted and using alias after FROM. Any chance you'd support Postgres, please?

Thanks for considering,
Nikhil

[Bug] Breaking change in stg_mixpanel___events.sql event table source

Is there an existing issue for this?

  • I have searched the existing issues

Describe the issue

The hardcoded source table introduced for events in commit 1b66f95 [https://github.com/fivetran/dbt_mixpanel/commit/1b66f95022f7e37c78863441b1fa66b48f36f8d8#diff-06f26e4fc8545509412ed16924c7ffacb49b0057ad1f6bb83d75ba1568197428] causes issues for users who set the event_table variable in their dbt_project.yml config.

Relevant error log or model output

No response

Expected behavior

The source event table can be determined by the global var event_table in the dbt_project.yml

dbt Project configurations

vars:
  event_table: "{{ ref('stg_mixpanel_union_old_new') }}"
  mixpanel__event_frequency_limit: 1000

Package versions

packages:
- package: fivetran/mixpanel
  version: [">=0.8.0", "<2.0.0"]

What database are you using dbt with?

snowflake

dbt Version

1.6

Additional Context

No response

Are you willing to open a PR to help address this issue?

  • Yes.
  • Yes, but I will need assistance and will schedule time during our office hours for guidance
  • No.

Add deduping logic within README

Following a conversation with a customer around the de-duping logic within mixpanel__events, we found it would be beneficial to clarify the reasoning behind our logic in the model and, more importantly, within the README.

Package fails if custom event properties contains spaces

I have recently started to use Mixpanel with our company and went to use this dbt package.

Using the event_properties_to_pivot configuration, I have added 4 different properties, each of which contains spaces. This has caused the executed SQL to be incorrect and fail to run, as the column names in the AS definitions are not quoted, e.g.:

replace(parse_json(event_properties):Company Name, '"', '') as Company Name,
replace(parse_json(event_properties):Region Code, '"', '') as Region Code,

Environment:
dbt environment - dbt cloud
destination - Snowflake DB
Package version - 0.1.0

datetime/timestamp issue

From email:

Hi Guys,

Just another thing that is hampering me right now. I am using the Mixpanel dbt package from Fivetran, but I am running into and error when using it:

Looking at the executed code, this is the part that is causing the error:

occurred_at >= coalesce((select cast( max(date_day) as timestamp

Looking at the source schema, which is "stg_mixpanel__event_tmp", the "time" column, which is renamed in a CTE, is of type DATETIME. This means that we are trying to do a DATETIME >= TIMESTAMP comparison, which isn't allowed in BigQuery.

And then looking at the code in GitHub, this appears to be the affected line:

occurred_at >= coalesce((select cast( max(date_day) as {{ dbt_utils.type_timestamp() }} ) from {{ this }} ), '2010-01-01')

I am unsure myself how to fix this since this code will run across multiple database systems.

Can this be raised with your dbt team please? Unfortunately this is really blocking me right now, so going to have to find a workaround

Thanks,

Craig

[Feature] Databricks Compatibility

Is there an existing feature request for this?

  • I have searched the existing issues

Describe the Feature

For Databricks Compatibility, add the following:

  1. Buildkite testing:
    • Update pre-command (example)
    • Update pipeline.yml (example)
    • Update sample.profiles.yml (example)
    • Add the below to integration_tests/dbt_project.yml if it's not there:

dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']

  2. For source packages, update the src yml so a database won't be passed to Spark (example, or use the below):

sources:
  - name: <name>
    database: "{% if target.type != 'spark' %}{{ var('<name>_database', target.database) }}{% endif %}"

  3. Update any incremental models to update partition_by for Databricks and add current strategies if not present:

config(
        materialized='incremental',
        unique_key='<original unique key>',
        partition_by={'field': '<original field>', 'data_type': '<original data type>'} if target.type not in ('spark','databricks') else ['<original field>'],
        incremental_strategy = 'merge' if target.type not in ('postgres', 'redshift') else 'delete+insert',
        file_format = 'delta'
)

Describe alternatives you've considered

No response

Are you interested in contributing this feature?

  • Yes.
  • Yes, but I will need assistance and will schedule time during your office hours for guidance.
  • No.

Anything else?

No response

[Feature] Improve run time of package with additional filter

Is there an existing feature request for this?

  • I have searched the existing issues

Describe the Feature

When running within BigQuery, I noticed that the model stg_mixpanel__event_tmp did not apply the same filter that the mixpanel__event model does, i.e.:

where time >= {{ "'" ~ var('date_range_start',  '2010-01-01') ~ "'" }}

Applying this filter in the view can significantly reduce the resource requirements for running the model. When running in BigQuery for our own data, a run of the original code would process 4.45TB of data. Applying this filter, with the configuration below in our dbt_project.yml file, changed the processing to 276GB, which is a drastic improvement.

This is especially relevant in development, where you can use Jinja to limit the number of records, which can greatly improve the time to test, e.g.:

vars:
   mixpanel:
      date_range_start: "{{ '2019-11-01' if target.name == 'production' else (modules.datetime.date.today() - modules.datetime.timedelta(7)) }}"

I believe applying the same variable-based filtering in the temporary table, when it is used as the source of the mixpanel__event model, is required to ensure the same amount of data is processed throughout the model.

Describe alternatives you've considered

No response

Are you interested in contributing this feature?

  • Yes.
  • Yes, but I will need assistance and will schedule time during your office hours for guidance.
  • No.

Anything else?

No response

[Feature] Update README

Is there an existing feature request for this?

  • I have searched the existing issues

Describe the Feature

The README needs to be updated to the current format.

Describe alternatives you've considered

No response

Are you interested in contributing this feature?

  • Yes.
  • Yes, but I will need assistance and will schedule time during your office hours for guidance.
  • No.

Anything else?

No response
