
Project: Data Modeling with Python and PostgreSQL


Overview

Sparkify wants to analyze the data they've been collecting on songs and user activity on their new music streaming app. A star schema has been implemented so that simple queries can aggregate the data quickly and effectively. The ETL pipeline allows the Sparkify team to add JSON files to the existing data folders and query that information quickly.

In this project, I apply what I've learned about data modeling with Postgres and build an ETL pipeline using Python. To complete the project, I define the fact and dimension tables for a star schema with a particular analytic focus, and write an ETL pipeline that transfers data from files in two local directories into these tables in Postgres using Python and SQL.


Features

  • Creates the songplay fact table (see the schema sketch after this list).
  • Creates the user, song, artist, and time dimension tables.
  • Iterates through the song and log JSON files and extracts their data.
  • Transforms and cleans the data to prepare it for table insertion.
  • Implements a song_select query that finds matching song_id and artist_id values based on song title, artist name, and song duration (also sketched below).
  • Inserts data into the respective Postgres database tables.
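
As a rough illustration of the star schema and the song_select lookup described above, here is a minimal sketch in the style of sql_queries.py. The column names and types are assumptions based on the standard Sparkify song/log datasets, not copied from this repository.

```python
# Sketch of sql_queries.py-style definitions (column names/types are assumptions).

songplay_table_create = """
    CREATE TABLE IF NOT EXISTS songplays (
        songplay_id SERIAL PRIMARY KEY,
        start_time  TIMESTAMP,
        user_id     INT,
        level       VARCHAR,
        song_id     VARCHAR,
        artist_id   VARCHAR,
        session_id  INT,
        location    VARCHAR,
        user_agent  VARCHAR
    );
"""

user_table_create = """
    CREATE TABLE IF NOT EXISTS users (
        user_id    INT PRIMARY KEY,
        first_name VARCHAR,
        last_name  VARCHAR,
        gender     VARCHAR,
        level      VARCHAR
    );
"""

# song_select: look up song_id and artist_id by song title, artist name, and duration.
song_select = """
    SELECT s.song_id, a.artist_id
    FROM songs s
    JOIN artists a ON s.artist_id = a.artist_id
    WHERE s.title = %s AND a.name = %s AND s.duration = %s;
"""
```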

Running the project

Files in the repository:

  • create_tables.py: Connects to Postgres, creates the sparkifydb database, drops the tables if they already exist, and creates the tables.
  • sql_queries.py: Provides the SQL statements to drop tables, create the fact and dimension tables, insert records, and find songs, collected into query lists.
  • etl.ipynb: Provides a testing environment for building the ETL pipeline.
  • etl.py: Production-ready ETL pipeline. Runs the functions that extract, transform, and load data into the Sparkify db (see the sketch after this list). The code was copied from etl.ipynb.
  • test.ipynb: Runs queries to verify that data has been inserted into the sparkify db.
  • log_data, song_data folders: hold the data needed for the Sparkify project.
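
The core of etl.py follows roughly the flow sketched below: walk the data directories with glob, read each JSON file with pandas, and insert rows with psycopg2. This is an illustration under assumed function names, column names, and connection parameters, not the exact code from this repo.

```python
import glob
import os

import pandas as pd
import psycopg2

from sql_queries import *  # insert/select statements defined locally in sql_queries.py


def get_files(filepath):
    """Collect every .json file under filepath."""
    all_files = []
    for root, dirs, files in os.walk(filepath):
        for f in glob.glob(os.path.join(root, "*.json")):
            all_files.append(os.path.abspath(f))
    return all_files


def process_song_file(cur, filepath):
    """Read one song file and insert a row into the songs table."""
    df = pd.read_json(filepath, lines=True)
    # Column and query names are assumptions based on the standard Sparkify song dataset.
    song_data = df[["song_id", "title", "artist_id", "year", "duration"]].values[0].tolist()
    cur.execute(song_table_insert, song_data)
    # An artists insert would follow the same pattern.


def process_data(cur, conn, filepath, func):
    """Apply func to every file under filepath, committing after each file."""
    for datafile in get_files(filepath):
        func(cur, datafile)
        conn.commit()


def main():
    # Connection parameters are assumptions; adjust to your local Postgres setup.
    conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
    cur = conn.cursor()
    process_data(cur, conn, filepath="song_data", func=process_song_file)
    # A process_log_file function would populate the time, users, and songplays tables similarly.
    conn.close()


if __name__ == "__main__":
    main()
```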

You will not be able to run test.ipynb, etl.ipynb, or etl.py until you have run create_tables.py at least once to create the sparkifydb database, which these other files connect to.
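
For reference, create_tables.py typically follows the pattern below: connect to a default database, recreate sparkifydb, then run the drop/create query lists from sql_queries.py. This is a hedged sketch; the default-database name, credentials, and query-list names are assumptions.

```python
import psycopg2

from sql_queries import create_table_queries, drop_table_queries  # assumed list names


def create_database():
    """Drop and recreate sparkifydb, then return a cursor and connection to it."""
    # Connect to a default database first (name and credentials are assumptions).
    conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student")
    conn.set_session(autocommit=True)
    cur = conn.cursor()
    cur.execute("DROP DATABASE IF EXISTS sparkifydb")
    cur.execute("CREATE DATABASE sparkifydb WITH ENCODING 'utf8' TEMPLATE template0")
    conn.close()

    conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
    return conn.cursor(), conn


def main():
    cur, conn = create_database()
    for query in drop_table_queries:    # drop tables if they exist
        cur.execute(query)
        conn.commit()
    for query in create_table_queries:  # create the fact and dimension tables
        cur.execute(query)
        conn.commit()
    conn.close()


if __name__ == "__main__":
    main()
```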


Dependencies

  • PostgreSQL database

The scripts use the following Python libraries and modules:

  • os
  • glob
  • psycopg2
  • pandas
  • sql_queries (local module)
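
Taken together, the import header used by etl.ipynb and etl.py looks like this (a sketch matching the list above):

```python
import os
import glob

import psycopg2
import pandas as pd

from sql_queries import *  # SQL statements and query lists defined locally
```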

Entity Relationship Diagram

SparkifyERD (image: star schema diagram of the fact and dimension tables)
