
01-twitter-crud's Introduction

Twitter database

🎯 The goal of this exercise is to create

  • a Twitter database using alembic, containing a tweets and a users table
  • and a functioning front-end with fastapi

1️⃣ Setup: Creating the database with psql


❓ Let's create a new database in our Postgres called twitter

Quickly make a db
createdb twitter

Now let's set up the .env by running cp .env.sample .env so that you have a URL ready to be used:

POSTGRES_PASSWORD = mypassword
POSTGRES_DATABASE_URL = postgresql+psycopg2://$USER:$POSTGRES_PASSWORD@localhost:5432/twitter

🔎 You might notice the additional +psycopg2 in our connection string that we did not have yesterday: here we are just specifying the driver package that sqlalchemy should use to connect to the database.
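As a plain-Python sketch (illustrative credentials, no database needed), the URL scheme splits into exactly those two parts, a dialect and a driver:

```python
# Sketch of how the "dialect+driver" scheme of a SQLAlchemy URL is read.
# The credentials below are illustrative, not your real .env values.
url = "postgresql+psycopg2://alice:secret@localhost:5432/twitter"

scheme = url.split("://", 1)[0]          # "postgresql+psycopg2"
dialect, _, driver = scheme.partition("+")

print(dialect)  # postgresql -> which SQL flavour SQLAlchemy speaks
print(driver)   # psycopg2   -> which Python package it connects with
```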

Then connect to our new database through dbeaver like we did yesterday!

❗️ Now we are ready to start creating our python files!

2️⃣ Python connection with sqlalchemy


We have set up a file for you at twitter_api/database.py which has what we need to connect to the database from our API. Try to read it and understand what we have created. Don't forget to set your VS Code interpreter to the poetry env of the day (interpreter path: which python).

1️⃣ First we import what we need from sqlalchemy.

import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

2️⃣ Next we get our database URL string from the environment variable

DB_URL = os.environ.get("POSTGRES_DATABASE_URL")

3️⃣ Now we create an engine, which is the point at which we are most abstracted away from the database. For us it is the initial point of connection.

engine = create_engine(DB_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

4️⃣ Let's test our connection: run the file with python twitter_api/database.py, and if it runs successfully your connection string is allowing us to execute queries against the database!

if __name__ == "__main__":
    with SessionLocal() as db:
        print(db.execute("""
            SELECT * FROM information_schema.tables
            WHERE table_schema = 'public'
            """).fetchall())

3️⃣ Creating the users table with Alembic


Go to twitter_api/models.py and fill in the User class using sqlalchemy declarative mapping.

Your mapping should be the equivalent of this SQL:

CREATE TABLE users (
	id serial4 PRIMARY KEY,
	email varchar NOT NULL UNIQUE,
	hashed_password varchar NOT NULL
);

Start with just the tablename and columns section.
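If you want to see these constraints in action before wiring up Postgres, here is a quick sketch using Python's built-in sqlite3 module (sqlite has no serial4, so the id column is spelled differently, but NOT NULL and UNIQUE behave the same; emails here are illustrative placeholders):

```python
import sqlite3

# Sketch of the users table constraints, using sqlite3 in place of Postgres.
# sqlite has no serial4; an INTEGER PRIMARY KEY auto-increments instead.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE,
        hashed_password TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO users (email, hashed_password) VALUES (?, ?)",
             ("alice@example.com", "notreally"))
try:
    # inserting the same email twice violates the UNIQUE constraint
    conn.execute("INSERT INTO users (email, hashed_password) VALUES (?, ?)",
                 ("alice@example.com", "other"))
except sqlite3.IntegrityError:
    print("duplicate email rejected")
```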

Now that we have our table defined we are ready to use alembic to autogenerate migrations!

The initial setup of alembic is a little tricky so we have included it all here:

1️⃣ The first step is to run

alembic init alembic

This will create a default alembic folder

2️⃣ Edit the alembic/env.py file in alembic folder so as to:

  • Update target_metadata from None to twitter_api.models.Base.metadata so as to tell Alembic to monitor any changes in tables defined by the twitter_api.models module.
  • Tell alembic how to connect to your postgres database. You could update alembic.ini line 58, but that would expose your POSTGRES_PASSWORD and you want to keep it private. Instead, we'll load your challenge-folder .env file inside alembic/env.py and set postgres url from inside:
import os
import pathlib
from dotenv import load_dotenv
env_path = pathlib.Path(__file__).resolve().parent.parent.joinpath(".env")
load_dotenv(env_path)
config.set_main_option("sqlalchemy.url", os.environ["POSTGRES_DATABASE_URL"])
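If you are curious what load_dotenv actually does, at its core it reads KEY=VALUE lines into os.environ. A rough stdlib-only sketch (the real python-dotenv also handles quoting, comments, export prefixes and variable interpolation):

```python
import os
import tempfile

def tiny_load_dotenv(path):
    """Rough sketch of load_dotenv: read KEY=VALUE lines into os.environ.

    This is only the core idea; use python-dotenv for real projects.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()

# Demo with a throwaway .env file (the value is a placeholder)
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("POSTGRES_PASSWORD = mypassword\n")
    path = f.name

tiny_load_dotenv(path)
print(os.environ["POSTGRES_PASSWORD"])  # mypassword
os.remove(path)
```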

3️⃣ Now our alembic setup should be ready to go, and you can run your first revision!

alembic revision --autogenerate -m "added users table"

This will generate a new file in alembic/versions describing the changes to apply to the database.

Let's have a look inside:

We can see all of the commands alembic runs to create our users table. We can edit this as much as we like; one really useful thing when creating a table is to add a couple of test rows at the same time. Let's edit our migration to insert these rows:

# Save the table to a variable so we can refer to it afterwards.
users = op.create_table(...)
# Then we bulk insert our rows after creation (emails are placeholders).
op.bulk_insert(users,
[
    {'email': 'alice@example.com', 'hashed_password': 'notreally'},
    {'email': 'bob@example.com', 'hashed_password': 'notreallyeither'}
])
Our new upgrade should look like this. downgrade does not need editing here, because dropping the table also removes the rows, but usually you need to mirror your changes there!

You can then apply this to your (still empty) database by running:

alembic upgrade head # head means: your latest database migration status (similar to git HEAD)

👉 Go check your database: you should see a users table created with all the columns, and two rows!

❗️ Note that there are some restrictions on what autogenerate will detect; you can read about them here.

4️⃣ Creating the API with FastAPI + Pydantic


There are three steps to creating the API:

1️⃣ Defining the Pydantic models in schemas.py
2️⃣ Defining the functions to interact with the database in crud.py
3️⃣ Defining the endpoints in main.py

4.1 Pydantic: Create data schemas in schemas.py

If you look at the documentation, you can see that fields of a Pydantic model are defined with type hints (for example id: int), compared to how we defined the table columns with SQLAlchemy (id = Column(Integer, ...)). We saw in the lecture the power they give our APIs, so let's build some for our users API.

  • The SQLAlchemy User() model in models.py describes the columns of the SQL users table.
  • The Pydantic UserBase(), UserCreate() and User() models in schemas.py describe how we would like to send and receive data through the API. They validate data types, and also let FastAPI generate documentation automatically later on!

Update schemas.py for Users. To decide where each field goes, ask yourself which piece of data should be checked at all stages (it belongs in UserBase), which only when creating a user (UserCreate), and which only when querying from the database (User).

💡 Solution for when you are stuck or done!
class UserBase(BaseModel):
    email: str


class UserCreate(UserBase):
    password: str


class User(UserBase):
    id: int

    class Config:
        orm_mode = True

Here the main piece you were probably missing was orm_mode, which lets us build these models straight from ORM objects!
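Conceptually, orm_mode tells Pydantic to read fields from object attributes (like a SQLAlchemy row) rather than from dict keys. A stdlib-only sketch of that difference, with FakeRow standing in for a SQLAlchemy User instance:

```python
class FakeRow:
    """Stand-in for a SQLAlchemy User row: the data lives in attributes."""
    def __init__(self, id, email):
        self.id = id
        self.email = email

def read_field(obj, name, orm_mode):
    # Without orm_mode, Pydantic expects mapping access (obj["email"]);
    # with orm_mode, it falls back to attribute access (obj.email).
    return getattr(obj, name) if orm_mode else obj[name]

row = FakeRow(1, "alice@example.com")  # illustrative placeholder data
print(read_field(row, "email", orm_mode=True))   # alice@example.com
try:
    read_field(row, "email", orm_mode=False)     # FakeRow is not subscriptable
except TypeError:
    print("dict-style access fails on an ORM object")
```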

4.2 Interact with the database via FastAPI without writing SQL (crud.py + main.py)

🎯 We want to create the four CRUD User functions in crud.py, and their associated FastAPI routes in main.py

1️⃣ GET /users/{user_id}

👉 We gave you the crud.read_user function

return db.query(models.User).filter(models.User.id == user_id).first()

Thanks to SQLAlchemy, you can see how we can interact with the db without writing a single line of SQL!

👉 Now check out main.py: the scaffolding is there, ready for you to start filling in the API! We gave you the main.read_user function:

@app.get("/users/{user_id}", response_model=schemas.User, tags=["users"])
def read_user(user_id: int, db: Session = Depends(get_db)):
    """get endpoint to read a given user"""
    db_user = crud.read_user(db, user_id=user_id)
    if db_user is None:
        raise HTTPException(status_code=404, detail="User not found")
    return db_user

🔎 The strangest thing here is db: Session = Depends(get_db).

  • Whenever a new request arrives, FastAPI takes care of calling your dependency get_db first. It's just a way to factor out a piece of code shared by all API endpoints.
  • get_db creates a sqlalchemy Session that stays active until the API call ends: as soon as main.read_user returns, the finally clause on line 24 runs and the session is closed. What we are doing here is making sure every single call to the API has its own session with the db. This is important when multiple users make calls to create new users at the same time!
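To make the yield/finally mechanics concrete, here is a sketch of a get_db-style dependency with a fake session standing in for SessionLocal (names are illustrative; your database.py follows the same pattern):

```python
class FakeSession:
    """Stand-in for a SQLAlchemy Session so the sketch runs without a DB."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

def get_db():
    # FastAPI runs the body up to `yield` before your endpoint,
    # and the `finally` block after the response has been sent.
    db = FakeSession()
    try:
        yield db
    finally:
        db.close()

# What FastAPI does for you, spelled out by hand:
gen = get_db()
db = next(gen)       # dependency resolved, session handed to the endpoint
print(db.closed)     # False -- session is open during the request
gen.close()          # request finished, the generator's finally runs
print(db.closed)     # True  -- session closed afterwards
```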
💡 equivalent syntax without FastAPI "Depends" magic
@app.get("/users/{user_id}", response_model=schemas.User, tags=["users"])
def read_user(user_id: int):
    """get endpoint to read a given user"""
    with SessionLocal() as db:
        # "with xxx" clause always run xxx.close() when finished
        db_user = crud.read_user(db, user_id=user_id)
        if db_user is None:
            raise HTTPException(status_code=404, detail="User not found")
    return db_user

👉 Let's run the app first and check out where we are starting from:

uvicorn twitter_api.main:app --reload
  • Make sure port 8000 is forwarded and go to localhost:8000 to see your app live!
  • As we populated our db with two users in our earlier migration, you can test it out with http://localhost:8000/users/1 for instance.
  • Check also localhost:8000/docs to see some lovely documentation automatically created. (This is all made possible by the Pydantic models we defined in schemas.py: FastAPI and Pydantic work hand in hand to create nice docs and an admin UI automatically.)

2️⃣ GET /users endpoint

This time, it's your turn to code your logic in crud.read_users and then main.read_users

3️⃣ POST /users endpoint

This one is slightly more complex, because you cannot create two users that have the same email (remember your constraint from models.py).

  • First, check if the user already exists using crud.read_user_by_email
  • If so, raise HTTPException
  • If not, call crud.create_user to create a python instance of User within the current alchemy db Session, then add and commit it to the db.

🧪 Test your code and actually create a user using the API docs!

You can now run the users tests (feel free to check them out for code to test apis!):

make test_users

If you have 3/3

git add .
git commit -m "users done"
git push origin main

5️⃣ Tweets 🦜


Now we want to create our tweets table, but don't worry: a lot of the patterns/logic here repeat what you already wrote! 😌

5.1 Create the table

❓ Let's return to our models.py and fill in the Tweet class.

This time the SQL equivalent you should be aiming for is:

CREATE TABLE tweets (
    id serial PRIMARY KEY,
    "text" varchar NOT NULL,
    owner_id int NOT NULL REFERENCES users(id)
);
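To see what REFERENCES enforces, here is another sqlite3 sketch (note that sqlite needs foreign-key checks switched on per connection, while Postgres enforces them by default; data is an illustrative placeholder):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite has FK checks off by default
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE,
        hashed_password TEXT NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE tweets (
        id INTEGER PRIMARY KEY,
        text TEXT NOT NULL,
        owner_id INTEGER NOT NULL REFERENCES users(id)
    )
""")
conn.execute("INSERT INTO users (email, hashed_password) "
             "VALUES ('alice@example.com', 'notreally')")
conn.execute("INSERT INTO tweets (text, owner_id) VALUES ('hello', 1)")  # ok
try:
    # owner_id 99 points at no user, so the foreign key rejects it
    conn.execute("INSERT INTO tweets (text, owner_id) VALUES ('oops', 99)")
except sqlite3.IntegrityError:
    print("tweet without an existing owner rejected")
```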

Watch out for the foreign key when we create the column:

owner_id = Column(Integer, ForeignKey("users.id"), nullable=False)

❓ Now, complete the "Relationships" sections in the User and Tweet classes of models.py, which we have left unfilled so far.

  • We want to be able to access the user.tweets
  • We want to be able to access the tweet.owner

💡 This is called a 1-N relationship. 💡 Read the SQLAlchemy docs to get the syntax right!

💡 Solution

In User

tweets = relationship("Tweet", back_populates="owner") # allows query "user.tweets"

In Tweet

owner = relationship("User", back_populates="tweets") # allows query "tweet.owner"

❓ Migrate your database with Tweets!

alembic revision --autogenerate -m "added tweets table"
alembic upgrade head

5.2 Create the schemas

❓ Now create the schemas for tweets. Here, since TweetCreate needs the same fields as TweetBase, you can just use pass. We keep them separate in case you later want a field that exists only at creation time.

5.3 Implement the endpoints

❓ Now work through the tweet sections of main.py and crud.py to complete the app! The logic is similar to the users endpoints.

  • Start with the POST /users/{user_id}/tweets/ route, and create some tweets!
  • Then code the GET /users/{user_id}/tweets/ route, and check that the relationship works by using the user.tweets syntax

🧪 You can now run the tweets tests:

make test_tweets

If you have 3/3

git add .
git commit -m "tweets done"
git push origin main

6️⃣ Likes ❤️ (optional)


🎯 Our last goal for today is to integrate the likes feature.

We want a new table:

CREATE TABLE likes (
	id serial NOT NULL PRIMARY KEY,
	owner_id int NOT NULL REFERENCES users(id),
	tweet_id int NOT NULL REFERENCES tweets(id)
);

as well as adding a like_count to tweets:

ALTER TABLE tweets
ADD COLUMN like_count INT DEFAULT 0

💡 We want our ORM to enable two new 1-N relationships:

  • user.likes <--> like.owner
  • tweet.likes <--> like.tweet

Create a Like model in models.py and update all the routes.

💪 Notice the last route GET /users/{user_id}/liked_tweets/ in particular!

  • Here, we want to read all liked_tweets from a user.
  • This is a many-to-many (N-N) relationship between users and tweets:
    • A "user" has many " liked_tweets"
    • A "tweet" has many "likers"

🏁 Congratulations! You're now able to build your own REST API!

Let us know your progress status by running

make test
git add .
git commit -m "completed twitter api"
git push origin main

Further Read


Optional 1

If you're done already, have a look at the way we build our tests, using a FastAPI TestClient to spawn a new server at each test, as well as a temporary local sqlite database to mimic your postgres!

Optional 2

📚 Bookmark and save for later this well-thought-out FastAPI-Backend-Template made by one of our Alumni, Nino Lindenberg.

It contains:

  • FastAPI
  • PostgreSQL via asynchronous SQLAlchemy 2.0
  • Alembic for async database migration
  • Docker
  • CI for backend application with GitHub Actions
  • CI for Pre-Commit auto-update
  • Pre-Commit hooks
  • CodeCov test report monitoring
