nayaverdier / dyntastic
A DynamoDB library on top of Pydantic and boto3.
License: MIT License
I just installed version 0.13.0 with pydantic v2 support. When importing the Dyntastic class, I get the below error:
I believe it's a matter of importing FieldInfo directly rather than assigning FieldInfo = pydantic.fields.FieldInfo. I opened a pull request with slight mods; feel free to pull or modify as makes more sense to you.
from dyntastic import Dyntastic
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/kirk/python-test/.venv/lib/python3.9/site-packages/dyntastic/__init__.py", line 1, in <module>
    from .attr import A, Attr
  File "/Users/kirk/python-test/.venv/lib/python3.9/site-packages/dyntastic/attr.py", line 10, in <module>
    from . import pydantic_compat
  File "/Users/kirk/python-test/.venv/lib/python3.9/site-packages/dyntastic/pydantic_compat.py", line 90, in <module>
    FieldInfo = pydantic.fields.FieldInfo
  File "/Users/kirk/python-test/.venv/lib/python3.9/site-packages/pydantic/__init__.py", line 363, in __getattr__
    return _getattr_migration(attr_name)
  File "/Users/kirk/python-test/.venv/lib/python3.9/site-packages/pydantic/_migration.py", line 306, in wrapper
    raise AttributeError(f'module {module!r} has no attribute {name!r}')
AttributeError: module 'pydantic' has no attribute 'fields'
When running Dyntastic.query() over a large data set with a filter_condition, the resulting generator contains zero items (i.e. raises StopIteration). The exact same query, with the exact same filter_condition, executed on a smaller data set returns the expected items from Dynamo.
Slightly difficult to explain over text, so I created a Loom to demonstrate the issue.
Given the following code:
from enum import Enum

from dyntastic import Dyntastic

class DummyJobClass:
    pass

class JobType(Enum):
    DUMMY_JOB_TYPE = DummyJobClass

class Job(Dyntastic):
    __table_name__ = "jobs"
    __hash_key__ = "job_id"
    __table_host__ = "http://localhost:8000"
    __table_region__ = "us-east-1"

    job_id: int
    type: JobType

    class Config:
        json_encoders = {JobType: lambda obj: obj.name}

model = Job(job_id=1, type=JobType.DUMMY_JOB_TYPE)
model.save()
Error:
File "pydantic/json.py", line 90, in pydantic.json.pydantic_encoder
TypeError: Object of type 'type' is not JSON serializable
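The failure can be reproduced with the standard library alone (a minimal repro, independent of dyntastic): the enum member's value is a class object, which the default JSON encoder has no way to serialize.

```python
import json
from enum import Enum

class DummyJobClass:
    pass

class JobType(Enum):
    DUMMY_JOB_TYPE = DummyJobClass

# The member's value is the class object itself, which json.dumps
# cannot serialize, producing the same kind of TypeError as above.
try:
    json.dumps(JobType.DUMMY_JOB_TYPE.value)
except TypeError as exc:
    print(exc)  # Object of type type is not JSON serializable
```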
This happens because the serialize function uses pydantic_encoder, which works for dataclasses because they do not implement a .json() method. For Sets and Decimals we can implement a json_encoder in Dyntastic, but if someone overrides the Config class it does not work. Another option would be to implement something like this:
https://github.com/pydantic/pydantic/blob/d9c2af3a701ca982945a590de1a1da98b3fb4003/pydantic/main.py#L242-L245
in this part of serialize:
else:
# handle types like datetime
return json.loads(json.dumps(data, default=pydantic_encoder))
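To illustrate the suggestion, here is a standard-library sketch of folding user-defined json_encoders into the default= callable. The lookup helper is hypothetical, not dyntastic's or pydantic's actual code:

```python
import json
from enum import Enum

class DummyJobClass:
    pass

class JobType(Enum):
    DUMMY_JOB_TYPE = DummyJobClass

# Stand-in for what a model's Config.json_encoders would supply
json_encoders = {JobType: lambda obj: obj.name}

def encoder_with_config(obj):
    # Walk the MRO so subclasses also match a base-class encoder,
    # similar in spirit to pydantic's ENCODERS_BY_TYPE lookup linked above
    for base in type(obj).__mro__[:-1]:
        if base in json_encoders:
            return json_encoders[base](obj)
    raise TypeError(f"Object of type {type(obj).__name__!r} is not JSON serializable")

data = {"job_id": 1, "type": JobType.DUMMY_JOB_TYPE}
print(json.dumps(data, default=encoder_with_config))  # {"job_id": 1, "type": "DUMMY_JOB_TYPE"}
```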
I am migrating from pynamodb to dyntastic and I am finding it hard to understand how to use indexes, or which ones are supported. I've found some misleading clues:
https://github.com/nayaverdier/dyntastic/blob/main/dyntastic/main.py#L66
# TODO: support INCLUDE projection?
self.projection = "KEYS_ONLY" if keys_only else "ALL"
https://github.com/nayaverdier/dyntastic/blob/main/dyntastic/main.py#L98
validated, fields_set, errors = validate_model(model, item)
if errors:
# assume KEYS_ONLY or INCLUDE index
https://github.com/nayaverdier/dyntastic/blob/main/README.md?plain=1#L164
DynamoDB Indexes using a `KEYS_ONLY` or `INCLUDE` projection are supported:
So which projections are supported? Also, how can we use the dyntastic.Index class so that I can add the index as a field to another model?
Hi! We just started trying to integrate your library into our ecosystem, but this is causing some conflicts with other libraries (because of the pinning of importlib-metadata at the moment, though the other two pins will eventually cause trouble as well). Would you consider removing the pinning?
Thank you!
While considering leveraging dyntastic for a Litestar based API, I was wondering whether you'd considered enabling async capabilities? I see dyntastic is built on top of boto3, and so perhaps the obvious way to support async would be via aioboto3. Do you have any plans or thoughts in this space?
Hi team,
I'm currently very interested in using this for my projects.
One thing that I'd like to get out of the box is Localstack support for the models, so that the host automatically gets rewritten to Localstack when Localstack is configured.
I'm adding this issue here as I think it would be a nice QOL upgrade, and perhaps I will get to it if I find time.
Hi! Awesome library, been trying it out and have a couple of questions.
I made a simple class:
import uuid
from datetime import datetime

from dyntastic import Dyntastic
from pydantic import Field

class Counter(Dyntastic):
    __table_name__ = "counters"
    __hash_key__ = "id"

    id: str = Field(default_factory=lambda: str(uuid.uuid4()))
    value: int = 0
    created: datetime = Field(default_factory=datetime.now)
    updated: datetime = Field(default_factory=datetime.now)
And I made a Lambda handler that does this (the intent is to increment the value by 1):
def run(event: EventBridgeEvent, context: LambdaContext):
    counter = Counter.safe_get(event.detail["id"])
    if not counter:
        counter = Counter(id=event.detail["id"])
        counter.save()
    counter.update(
        A.value.plus(1),
        A.updated.set(datetime.now()),
    )
Although I see the counter being initialized in Dynamo, the updates don't seem to take effect. For reference this does work:
def run(event: EventBridgeEvent, context: LambdaContext):
    counter = Counter.safe_get(event.detail["id"])
    if not counter:
        counter = Counter(id=event.detail["id"])
        counter.save()
    counter.value += 1
    counter.updated = datetime.now()
    counter.save()
So I'm just wondering if maybe I'm misunderstanding something, or if I've stumbled on a bug. I'm also not clear on the difference between these two ways of updating; I guess counter.update() is more efficient? Any help you can provide here would be awesome, thank you for taking the time to read this over.
Hi there! Is there any way to set a custom endpoint_url for DynamoDB? For local development I'm using the dynamodb Docker image, so I'm looking for a way to set my http://localhost:8000 for the boto3 resource/client. Any suggestions on how to set it up? Thanks in advance.
Is there any plan to wrap the batch read or write methods available in boto3?
Perhaps something like:
Event.batch_get(
"key1",
"key2",
"key3"
)
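Any such wrapper would have to respect DynamoDB's BatchGetItem limit of 100 keys per request. A standard-library sketch of the chunking logic (the helper name is illustrative, not dyntastic API):

```python
from typing import Iterable, Iterator, List

BATCH_GET_LIMIT = 100  # DynamoDB BatchGetItem accepts at most 100 keys per call

def chunk_keys(keys: Iterable[str], size: int = BATCH_GET_LIMIT) -> Iterator[List[str]]:
    """Yield successive batches of keys, each no larger than `size`."""
    batch: List[str] = []
    for key in keys:
        batch.append(key)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# e.g. 250 keys split into batches of 100, 100, 50
batches = list(chunk_keys([f"key{i}" for i in range(250)]))
print([len(b) for b in batches])  # [100, 100, 50]
```

A batch_get like the one proposed could then issue one BatchGetItem call per chunk and concatenate the results.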
Hey all, just stumbled on this project and I LOVE it so far.
I use the Docker-based amazon/dynamodb-local:latest image for local testing and deploy tables into AWS with CDK, which makes the table names variable based on the deployment.
When testing locally, I have to change the model to set __table_host__ and __table_region__ to the local DDB instance, and remember to remove that before commit/deploy.
And for the __table_name__, I currently have to hard-code the table name in CDK so that it matches the model definition. I'd rather use the dynamically generated table names and then set them as an ENV variable on my lambdas/containers.
I think it would be great if these props could be set natively using environment variables, rather than hard-coded in the models. That way they would be easy to override for local testing/unit tests and could be set dynamically when deployed to AWS.
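A minimal sketch of the idea, using only the standard library (the environment variable names here are made up for illustration, not an existing dyntastic convention):

```python
import os

# Hypothetical: resolve table settings from the environment, falling
# back to hard-coded defaults for local development.
table_name = os.environ.get("JOBS_TABLE_NAME", "jobs")
table_host = os.environ.get("DYNAMODB_HOST", "http://localhost:8000")
table_region = os.environ.get("AWS_REGION", "us-east-1")
```

Models could then read these values at class-definition time instead of hard-coding them, so CDK-generated table names only need to be exported as environment variables on the lambdas/containers.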
If you are in support of this idea, I'll create a PR to implement.
Given an Example table with a secondary index that has projection_type = 'KEYS_ONLY':
class Example(Dyntastic):
    __table_name__ = "example"
    __hash_key__ = "id"
    __table_host__ = "http://localhost:8000"
    __table_region__ = "us-east-1"

    some_required_attribute: str
When you get the results from DynamoDB, only the __hash_key__ is returned because of the projection_type; this raises an error when calling the query() method, because some_required_attribute is missing.
query_results = cls.query(
    A.id == id,
    index="example-index",
)
You can make all the fields optional to make this work, but I think that is a bad idea when using schema validation, lol.
Feature Idea:
When querying secondary indexes with a KEYS_ONLY projection, it would be great to:
(1) disable pydantic schema validation when fetching the data, validate only the keys, and return an object containing the partition keys only.
(2) (via an optional argument to query) have the query method return the full objects using Get operations after (1).
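A standard-library sketch of idea (1), with a hypothetical helper (not dyntastic API) that validates only the key attributes of a KEYS_ONLY result instead of running full-model validation:

```python
from typing import Any, Dict, Optional

def validate_keys_only(item: Dict[str, Any], hash_key: str,
                       range_key: Optional[str] = None) -> Dict[str, Any]:
    """Keep only the key attributes from a KEYS_ONLY projection result.

    Raises if a key attribute is missing, but ignores absent non-key
    fields instead of failing schema validation on them.
    """
    keys = [hash_key] + ([range_key] if range_key else [])
    missing = [k for k in keys if k not in item]
    if missing:
        raise ValueError(f"missing key attributes: {missing}")
    return {k: item[k] for k in keys}

# A KEYS_ONLY projection only returns the key attributes:
print(validate_keys_only({"id": "abc"}, hash_key="id"))  # {'id': 'abc'}
```

Idea (2) would then take the key dicts produced here and issue Get operations to hydrate full, fully-validated model instances.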
Following on from #21 (comment).
I don't really have a clear path for this, but I'd expect something like:
Model.update("id", updates..., refresh=False) (or True!)
Maybe it needs a new name beyond update, not sure. I definitely think this would be a great addition though.
Hi team,
So I've had a look at the mechanism for querying and I feel it's very close to complete. My only suggestion would be to rework the way the query works to allow feeding in the class and its type-hinted attributes, with the work offloaded to the Attr library internally.
This would allow for auto-completion when querying an object. I haven't played with the library much, but from the looks of it, you won't get static analysis on those variables from the Attr class.
I think this would be the final implementation change that would make the library extraordinary and bring it into wide usage.
I will reiterate that this library is fantastic and I'm extremely grateful for the work already done on it. I'm curious whether I can find a way to implement the above, so I'll try to take a look at it.
Support Pydantic v2
First I wanted to say thank you for creating this library. I've been looking for a way to represent DynamoDB "models" using something like Pydantic but didn't want a full ORM like Pynamo. We're having a discussion about it on aws-powertools/powertools-lambda-python#2053.
Any thoughts on supporting TransactWriteItems and TransactGetItems? It would be cool to use a context manager when writing a transaction, to automatically flush it once it reaches 100 items.
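A standard-library sketch of that context-manager idea; the flush callback stands in for a real TransactWriteItems call, and the 100-item cap matches DynamoDB's per-transaction limit (class and method names are illustrative, not dyntastic API):

```python
from typing import Any, Callable, List

class TransactionWriter:
    """Buffer write items and flush in batches of at most `limit`,
    with a final flush of any remainder on clean context exit."""

    def __init__(self, flush: Callable[[List[Any]], None], limit: int = 100):
        self.flush = flush
        self.limit = limit
        self.items: List[Any] = []

    def add(self, item: Any) -> None:
        self.items.append(item)
        if len(self.items) >= self.limit:
            self._flush()

    def _flush(self) -> None:
        if self.items:
            self.flush(self.items)
            self.items = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Only flush the remainder if the block exited without error
        if exc_type is None:
            self._flush()
        return False

batches: List[List[Any]] = []
with TransactionWriter(batches.append, limit=100) as tx:
    for i in range(150):
        tx.add({"id": i})
print([len(b) for b in batches])  # [100, 50]
```

In a real implementation, the flush callback would call boto3's transact_write_items with the buffered items.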