schireson / pytest-mock-resources

Pytest fixtures that let you actually test code that depends on external resources (Postgres, Mongo, Redshift...).

Home Page: https://pytest-mock-resources.readthedocs.io/en/latest/quickstart.html

License: MIT License

Python 99.50% Makefile 0.50%
Topics: library, pytest, python, docker, postgres, mongodb, redshift, pytest-fixtures, tidepod, mongo

pytest-mock-resources's People

Contributors

dancardin, jarrettalexander77, jonm0, kianmeng, langelgjm, michaelbukachi, mlambert-zotec, oakhan3, ocaballeror, prateekpisat, willmclaren



pytest-mock-resources's Issues

Add in Left/Right UDF's for Redshift

Describe the bug
Redshift's left/right behave differently than left/right in Postgres: Redshift allows integers to be passed in, but Postgres does not. We have fixed this locally by creating some UDFs in our fixture on startup. The SQL for these is as follows:

CREATE OR REPLACE FUNCTION left (
    s1 integer, s2 integer
) RETURNS integer
    LANGUAGE sql IMMUTABLE LEAKPROOF AS
'SELECT left(s1::text, s2)::integer';

The UDF for right looks the same.
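For clarity, here is a hedged sketch in plain Python of the behavior the UDF above emulates (the function name `left_int` is illustrative, not part of pytest-mock-resources):

```python
def left_int(s1: int, s2: int) -> int:
    """Redshift-style left() on integers: cast to text, take the first
    s2 characters, and cast back to integer."""
    return int(str(s1)[:s2])

# left_int(12345, 3) keeps the first three digits of 12345.
```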

We would like to contribute this back to the repo so that future users don't have to deal with the same issue. I found this file here as a starting point, but noticed that it consists mostly of datediff logic, with some reusable components near the top. I am looking for guidance on where to add this change. I was thinking of splitting the datediff logic into its own file and adding this new change in another new file in that same folder.

Let me know if you have any other thoughts. I will begin work on this once we have some ideas as to where it will live. I believe we could address issue #53 as well once we know a better way to lay out those UDFs.

SQLAlchemy 2.0 Support

Is your feature request related to a problem? Please describe.
Hello,
We have migrated most of our code to SQLAlchemy 2.0 syntax in preparation for its release, using this guide. In that step, we noticed quite a number of warnings coming from pytest-mock-resources.

Describe the solution you'd like
I'm wondering if it's possible to fully support SQLAlchemy 2.0. I'm not sure whether making it compatible would introduce breaking changes for older versions. Willing to open a PR for this.

Additional context
Below is a list of the warnings thrown (there are only two):

  /home/michael/.cache/pypoetry/virtualenvs/app-d-AWDS2Y-py3.9/lib/python3.9/site-packages/pytest_mock_resources/fixture/database/relational/postgresql.py:99: RemovedIn20Warning: Passing a string to Connection.execute() is deprecated and will be removed in version 2.0.  Use the text() construct, or the Connection.exec_driver_sql() method to invoke a driver-level SQL string. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
    conn.execute(

  /home/michael/.cache/pypoetry/virtualenvs/app-d-AWDS2Y-py3.9/lib/python3.9/site-packages/pytest_mock_resources/fixture/database/relational/postgresql.py:99: RemovedIn20Warning: The current statement is being autocommitted using implicit autocommit, which will be removed in SQLAlchemy 2.0. Use the .begin() method of Engine or Connection in order to use an explicit transaction for DML and DDL statements. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
    conn.execute(
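For reference, both warnings point at the same 2.0-style pattern: wrap raw SQL in text() and use an explicit transaction via engine.begin(). A minimal sketch against an in-memory SQLite engine (not PMR's actual code):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

# text() replaces the deprecated bare-string execute; engine.begin() opens an
# explicit transaction instead of relying on implicit autocommit.
with engine.begin() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```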

Engine CM causes patched methods to no longer be patched

Describe the bug
When using a context manager (e.g. with connection.redshift_engine.begin() as redshift_connection:), the Redshift COPY function is no longer patched into the psycopg2 engine.

Environment
All

To Reproduce
Steps to reproduce the behavior:

  1. Open a context-managed Redshift connection: with connection.redshift_engine.begin() as redshift_connection:
  2. Execute a Redshift COPY statement with non-Postgres syntax (include a credentials argument, for example)

Expected behavior
The copy statement is executed, and the data from the source appears in the destination.

Actual Behavior

self = <sqlalchemy.dialects.postgresql.psycopg2.PGDialect_psycopg2 object at 0x11f63d240>
cursor = <cursor object at 0x11ebf6238; closed: -1>
statement = "copy automri_assignment\n                        from 's3://xxx/xxx...asnull\n                        ignoreblanklines\n                        trimblanks;\n                        commit;"
parameters = {}
context = <sqlalchemy.dialects.postgresql.psycopg2.PGExecutionContext_psycopg2 object at 0x11e93a978>

    def do_execute(self, cursor, statement, parameters, context=None):
>       cursor.execute(statement, parameters)
E       psycopg2.errors.SyntaxError: syntax error at or near "credentials"
E       LINE 3:                         credentials 'aws_access_key_id=xxx...
E                                       ^

Additional context
Feel free to reach out for more context if needed!

Exception trying to unlink pmr.json when ran with --pmr-multiprocess-safe

Describe the bug
When running tests with pytest-xdist, using the flag --pmr-multiprocess-safe, the tests are executed correctly but pytest_sessionfinish throws an error: PermissionError: [WinError 32] The process cannot access the file because it is being used by another process when trying to unlink pmr.json while holding its lock.

Environment

  • Host OS: Windows 11
  • Docker image if applicable: postgres default
  • Python Version: 3.10.2
  • Virtualenv/Pyenv etc.. if applicable: poetry
[tool.poetry.dependencies]
python = "^3.10"

[tool.poetry.dev-dependencies]
pytest = "^7.0"
pytest-xdist = "^2.5"
pytest-mock-resources = { extras = ["postgres-binary"], version = "^2.2" }
pywin32 = ">227"

To Reproduce
Steps to reproduce the behavior:

  1. Create the following files:
# conftest.py
from pytest_mock_resources import create_postgres_fixture
pg_engine = create_postgres_fixture()
# test_main.py
def test(pg_engine):
    with pg_engine.connect() as conn:
        assert conn.execute('select 1').scalar() == 1
  2. Run pytest -n2 --pmr-multiprocess-safe
  3. The test executes correctly, but the error is printed at the end

Expected behavior
No exception is raised and pmr.json is removed from the tmp folder.

Actual Behavior

Testing started at 20:37 ...
Launching pytest with arguments -n2 --pmr-multiprocess-safe --no-header --no-summary -q in F:\code\pmr_bug_repr

============================= test session starts =============================
gw0 I / gw1 I
[gw0] win32 Python 3.10.2 cwd: F:\code\pmr_bug_repr
[gw1] win32 Python 3.10.2 cwd: F:\code\pmr_bug_repr
[gw0] Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)]
[gw1] Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)]
gw0 [1] / gw1 [1]

scheduling tests via LoadScheduling

test_main.py::test 
[gw0] [100%] PASSED test_main.py::test 
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2021.1\plugins\python\helpers\pycharm\_jb_pytest_runner.py", line 51, in <module>
    sys.exit(pytest.main(args, plugins_to_load + [Plugin]))
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\_pytest\config\__init__.py", line 165, in main
    ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\_pytest\main.py", line 315, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\_pytest\main.py", line 303, in wrap_session
    config.hook.pytest_sessionfinish(
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_callers.py", line 55, in _multicall
    gen.send(outcome)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\_pytest\terminal.py", line 792, in pytest_sessionfinish
    outcome.get_result()
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "C:\Users\[me]\AppData\Local\pypoetry\Cache\virtualenvs\pmr-bug-repr-pLX3u5c4-py3.10\lib\site-packages\pytest_mock_resources\hooks.py", line 103, in pytest_sessionfinish
    fn.unlink()
  File "C:\Users\[me]\AppData\Local\Programs\Python\Python310\lib\pathlib.py", line 1204, in unlink
    self._accessor.unlink(self)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\[me]\\AppData\\Local\\Temp\\pytest-of-[me]\\pmr.json'

Process finished with exit code 1

Additional context
I was able to fix it by dedenting the fn.unlink() call in hooks.py by one level, moving it outside the lockfile context manager.
Turning this

def pytest_sessionfinish(session, exitstatus):
    ...
    with load_container_lockfile(fn) as containers:
        ...
        fn.unlink()

into this

def pytest_sessionfinish(session, exitstatus):
    ...
    with load_container_lockfile(fn) as containers:
        ...
    fn.unlink()
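The underlying failure mode is generic to Windows: unlinking a file while any handle to it is still open raises PermissionError. A stdlib-only sketch of the corrected ordering (release the handle, then unlink):

```python
import os
import tempfile

# Create and open a throwaway file, standing in for pmr.json.
fd, path = tempfile.mkstemp(suffix=".json")
handle = os.fdopen(fd, "w")
handle.write("{}")

# On Windows, calling os.unlink(path) here, while `handle` is still open,
# raises PermissionError; the fix above releases the handle first.
handle.close()
os.unlink(path)
```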

Make redshift a "first-class" fixture

Is your feature request related to a problem? Please describe.
Today redshift just piggybacks off postgres's container definition to start its container.

This precludes us from making database-wide changes, like SET session_replication_role = 'replica'; which is a potential option for #62 and other specific kinds of tests.

If we used the same existing container startup mechanisms for redshift as we do for postgres, but left the host/port/user/pass the same; then PMR will think it has already been started. And then we'll either want to prefix the database name with "postgres/redshift" or ensure they're both using the same PMR-specific table for record-keeping on tests.

With that said, we'd have the same default behavior as today, but with the ability to theoretically configure the settings to be different for redshift and produce a different container.

Allow specifying an image when using the pmr CLI

It would be nice if we could specify the image version downloaded when running pmr. Currently, the image versions are hardcoded in the code. Maybe we could use an environment variable? 🤔
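A hedged sketch of the environment-variable idea; the variable name PMR_POSTGRES_IMAGE and the helper below are hypothetical, not existing PMR options:

```python
import os

# Hardcoded default, as in the current behavior described above.
DEFAULT_POSTGRES_IMAGE = "postgres:9.6.10-alpine"

def resolve_image() -> str:
    # Prefer the (hypothetical) override variable; fall back to the default.
    return os.environ.get("PMR_POSTGRES_IMAGE", DEFAULT_POSTGRES_IMAGE)

os.environ["PMR_POSTGRES_IMAGE"] = "postgres:13-alpine"
image = resolve_image()
```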

get_sqlalchemy_engine() not compatible with asyncpg

Describe the bug
I'm unable to use pytest_mock_resources.create_postgres_fixture(async_=True) without psycopg2 installed.

I'm seeing:

E       RuntimeError: Cannot use postgres/redshift fixtures without psycopg2.
E       pip install pytest-mock-resources[postgres] or pytest-mock-resources[[postgres-binary].
E       Additionally, pip install pytest-mock-resources[redshift] for redshift fixtures.

It's not documented that pytest_mock_resources.sqlalchemy.create_async_engine() should be used, and that doesn't auto-create the container either.

Environment

  • Host OS: Debian 11 Gnu/Linux
  • Docker image if applicable: N/A
  • Python Version: 3.11
  • Virtualenv/Pyenv etc.. if applicable: N/A

To Reproduce
Steps to reproduce the behavior:

  1. pip install pytest_mock_resources[postgres-async]
  2. Run pytest_mock_resources.create_postgres_fixture(async_=True)

Additional context
I'd expect either:

  1. create_postgres_fixture() to work out which postgres driver to connect with, rather than always depending on psycopg2
  2. A separate fixture create_postgres_fixture_async()
  3. At bare minimum, the postgres-async extra should install everything needed, including psycopg2, though that adds dev dependencies.

sqlite missing json support

Missing json/jsonb column type support. This is supportable by a SQLiteDialect_pysqlite subclass and a custom SQLiteTypeCompiler

redshift_fixture does not play nice with pandas.DataFrame.to_sql

When attempting to insert data into the redshift fixture via pandas.DataFrame.to_sql, no error message of any kind is emitted, but no rows get inserted. Since there is a workaround (see below), it would be great if we could at least track down any error messages being dropped and report them back to the user.

Ideally, of course, we fix the underlying issue to make to_sql work correctly, but the urgency is not as great since a workaround exists.

The workaround is

from pandas.io.sql import SQLDatabase, SQLTable

pandas_sql_engine = SQLDatabase(engine=_redshift, schema=schema)
sql_table = SQLTable(
    name=table,
    pandas_sql_engine=pandas_sql_engine,
    frame=data,
    schema=schema,
    index=False,
)
sql_table.insert()

Session fixtures

As an engineer writing tests, most tests of free functions operate on sessions, not engines. Yielding a session scoped to the test body would be ideal in these cases.
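A hedged sketch of what such a session fixture could look like, using a stand-in SQLite engine in place of a PMR engine fixture (the fixture names here are illustrative, not PMR APIs):

```python
import pytest
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

# Stand-in engine; with pytest-mock-resources this would come from an engine
# fixture such as create_postgres_fixture() (sketch only, not the PMR API).
engine = create_engine("sqlite://")

@pytest.fixture
def db_session():
    # Yield a Session scoped to the test body; closed when the test ends.
    with Session(engine) as session:
        yield session

def test_free_function(db_session):
    assert db_session.execute(text("SELECT 1")).scalar() == 1
```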

Redshift fixture should not enforce primary, foreign key, or unique constraints

Is your feature request related to a problem? Please describe.

Because the Redshift fixture is based on Postgres, and Postgres enforces constraints, the Redshift fixture does not behave like Redshift. E.g., the Redshift fixture will raise integrity errors when attempting to insert rows with duplicate primary keys, whereas Redshift itself will not.

The goal of the Redshift fixture is to provide a fixture that behaves like Redshift. This includes the behavior of not respecting constraints, since that very behavior is likely to be desirable to test.

Describe the solution you'd like

We can do this ad-hoc now by something like:

alter_table_drop_constraint_statements = [
    f"ALTER TABLE {table} DROP CONSTRAINT IF EXISTS {table.name}_pkey CASCADE" for table in Base.metadata.tables.values()
]
redshift = create_redshift_fixture(Base, Statements(*alter_table_drop_constraint_statements), scope="session")

But this could also happen automatically inside the fixture using SQLAlchemy's ability to execute custom DDL on an after_create DDL event.

Describe alternatives you've considered

An alternative would be to override the SQLAlchemy Redshift dialect to not emit constraints during DDL compilation. However, Redshift does accept DDL with such constraints, and as such this solution does not provide true parity between the fixture and Redshift.

Support non-session container fixtures for all docker resources

Is your feature request related to a problem? Please describe.
We should support non-session fixtures for all resources. In particular, we cannot model all possible use cases without this.

A couple usecases that come to mind for databases are:

  • code is configured to connect to a specific database
    • blackbox testing PMR with PMR would be difficult!
  • testing e.g. postgres roles/permissions, e.g. global things in the context of the resource
  • resources which do not support multitenancy well.

Given that all docker-based fixtures route through the same container-producing function, it feels like we could enable a common set of options available on all fixtures: this one as much as the scope of the fixture you get, itself.

DISTKEY not supported

Describe the bug
One of the main changes in Redshift vs Postgres is how indexes/keys are used. More specifically, one can specify a DISTKEY and SORTKEY for a table, like:

CREATE TABLE myschema.mytable (
    col1 INT,
    col2 VARCHAR(20)
)
DISTKEY(col1)
SORTKEY(col1, col2);

This can improve performance by helping Redshift keep joins on a single node, skip partitions, etc.

For my tests, I don't care about the performance, but I need the production SQL to look like this, and I want my tests to test actual code, not a similar but not the same variation.

It would make this mock library more complete, and it should probably be handled like UNLOAD etc. NB: I am a newbie using this library, so I haven't checked how this is handled.

Environment

  • Host OS
    Using Windows, setting PYTEST_MOCK_RESOURCES_HOST=localhost (somehow my networking failed for host.docker.internal, but it could be Docker Desktop or Corporate Security)

  • Docker image if applicable
    Postgres with default params as provided by pytest version 1.4.1

  • Python Version
    Python 3.8.5 64-bit

  • Virtualenv/Pyenv etc.. if applicable
    virtualenv, installing pytest-mock-resources with postgres binaries and redshift

Also in requirements.txt before trying pytest-mock-resources:
SQLAlchemy
psycopg2
sqlalchemy-redshift
pandas

To Reproduce
Steps to reproduce the behavior:

  1. Create a test case executing sql

redshift = create_redshift_fixture()

def test_redshift_with_distkey(redshift):
    with redshift.connect() as conn:
        conn.execute('create table table1 ( col1 int, col2 varchar(20)) distkey(col1) sortkey(col1, col2);')

  2. Run pytest
  3. See error:
     The error is logged and the test is aborted:
     (psycopg2.errors.SyntaxError) syntax error at or near "DISTKEY"

Expected behavior
Since DISTKEY and SORTKEY probably aren't relevant in test cases, ignore/remove this part of the SQL before executing it in the docker Postgres.

Actual Behavior
Described above - can't disclose much more anyways.

Additional context
Running simpler SQL works in my current setup

Seems related to #82
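A hedged sketch of the preprocessing suggested under "Expected behavior": strip DISTKEY/SORTKEY clauses before handing the DDL to the Postgres backend (the regex is illustrative and deliberately not exhaustive, not PMR's actual approach):

```python
import re

# Illustrative, non-exhaustive pattern for DISTKEY/SORTKEY table attributes.
DIST_SORT_KEY = re.compile(r"\s*(DISTKEY|SORTKEY)\s*\([^)]*\)", re.IGNORECASE)

def strip_redshift_keys(sql: str) -> str:
    # Drop the Redshift-only clauses so plain Postgres accepts the DDL.
    return DIST_SORT_KEY.sub("", sql)

ddl = (
    "create table table1 (col1 int, col2 varchar(20)) "
    "distkey(col1) sortkey(col1, col2);"
)
cleaned = strip_redshift_keys(ddl)
```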

No such event 'before_execute' for target '<sqlalchemy.orm.session.Session object ...

Describe the bug

Environment

  • Ubuntu 22.04.1 LTS
  • 3.9
  • venv

To Reproduce
Trying to reproduce the example given

tests/test.temp.py:

# Redshift Example:
from pytest_mock_resources import create_redshift_fixture
from package.utilities import sql_sum

db = create_redshift_fixture()
# or
db = create_redshift_fixture(session=True)

def test_sql_sum(db):
   sql_sum(db)


# Postgres Example:
from pytest_mock_resources import create_postgres_fixture
from package.utilities import sql_sum

db = create_postgres_fixture()
# or
db = create_postgres_fixture(session=True)

def test_sql_sum(db):
   sql_sum(db)

src/util/temp.py:

# Redshift Example:
from pytest_mock_resources import create_redshift_fixture
from util.temp import sql_sum

db = create_redshift_fixture()
# or
db = create_redshift_fixture(session=True)

def test_sql_sum(db):
   sql_sum(db)

Expected behavior
Test to be concluded successfully

Actual Behavior

==================================================================== test session starts =====================================================================
platform linux -- Python 3.9.12, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/zanini/repo/RecSys, configfile: tox.ini
plugins: mock-resources-2.6.0, xdist-1.34.0, postgresql-4.1.1, cov-2.12.1, forked-1.4.0, anyio-3.6.2
collected 1 item                                                                                                                                             

tests/test_temp.py E                                                                                                                                   [100%]

=========================================================================== ERRORS ===========================================================================
_______________________________________________________________ ERROR at setup of test_sql_sum _______________________________________________________________

pmr_redshift_container = python_on_whales.Container(id='c2e8218e3a27', name='pmr_redshift_5532')
pmr_redshift_config = RedshiftConfig(username='user', image='postgres:9.6.10-alpine', port=5532, root_database='dev', ci_port=5432, host='localhost', password='password')

    @pytest.fixture(scope=scope)
    def _sync(pmr_redshift_container, pmr_redshift_config):
        engine_manager = _create_engine_manager(pmr_redshift_config)
        database_name = engine_manager.engine.url.database
    
        for engine in engine_manager.manage_sync():
>           sqlalchemy.register_redshift_behavior(engine)

../.venv/lib/python3.9/site-packages/pytest_mock_resources/fixture/redshift/__init__.py:101: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../.venv/lib/python3.9/site-packages/pytest_mock_resources/patch/redshift/sqlalchemy.py:14: in register_redshift_behavior
    event.listen(engine, "before_execute", receive_before_execute, retval=True)
../.venv/lib/python3.9/site-packages/sqlalchemy/event/api.py:115: in listen
    _event_key(target, identifier, fn).listen(*args, **kw)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

target = <sqlalchemy.orm.session.Session object at 0x7f8c1050e6a0>, identifier = 'before_execute', fn = <function receive_before_execute at 0x7f8c11de7ee0>

    def _event_key(target, identifier, fn):
        for evt_cls in _registrars[identifier]:
            tgt = evt_cls._accept_with(target)
            if tgt is not None:
                return _EventKey(target, identifier, fn, tgt)
        else:
>           raise exc.InvalidRequestError(
                "No such event '%s' for target '%s'" % (identifier, target)
            )
E           sqlalchemy.exc.InvalidRequestError: No such event 'before_execute' for target '<sqlalchemy.orm.session.Session object at 0x7f8c1050e6a0>'

../.venv/lib/python3.9/site-packages/sqlalchemy/event/api.py:29: InvalidRequestError
================================================================== short test summary info ===================================================================
ERROR tests/test_temp.py::test_sql_sum - sqlalchemy.exc.InvalidRequestError: No such event 'before_execute' for target '<sqlalchemy.orm.session.Session obj...
====================================================================== 1 error in 7.82s ======================================================================

Additional context
I managed to fix the issue by noticing that the event "before_execute" and the event in the following call, "before_cursor_execute", are core events that should be bound to a connection.

Changing register_redshift_behavior in pytest_mock_resources/patch/redshift/sqlalchemy.py

def register_redshift_behavior(engine):
    """Substitute the default execute method with a custom execute for copy and unload command."""

    event.listen(engine, "before_execute", receive_before_execute, retval=True)
    event.listen(engine, "before_cursor_execute", receive_before_cursor_execute, retval=True)

to

def register_redshift_behavior(engine):
    """Substitute the default execute method with a custom execute for copy and unload command."""

    event.listen(engine.connection(), "before_execute", receive_before_execute, retval=True)
    event.listen(engine.connection(), "before_cursor_execute", receive_before_cursor_execute, retval=True)

This solved the problem, since a sqlalchemy.orm.session.Session object was being passed instead of a connection.

Not sure if that was due to some issue on my system, since this is the main code path I assume others have previously used without problems.

Tests fail in container

Describe the bug
I'm trying to run tests within a container, but they keep failing.

Environment

  • Host OS - Ubuntu
  • Docker image if applicable - python:3.6.8-slim
  • Python Version - python:3.6.8
  • Virtualenv/Pyenv etc.. if applicable

To Reproduce
Steps to reproduce the behavior:

  1. Create simple test && setup fixtures
  2. Build test container
  3. Run test container with docker run -v /var/run/docker.sock:/var/run/docker.sock image
    Here's the Dockerfile
FROM python:3.6.8-slim

RUN apt-get update

RUN apt-get install -y apt-file

RUN apt-file update

RUN apt-get install -y libpq-dev gcc

RUN mkdir /project

ADD requirements.txt /project

WORKDIR /project

RUN pip install --upgrade pip

RUN pip install -r requirements.txt

RUN apt-get install -y wget

RUN wget -O get-docker.sh https://get.docker.com

RUN chmod +x get-docker.sh && ./get-docker.sh

COPY tests /project/tests
COPY pytest.ini /project
COPY .env.test /project

CMD ["pytest", "-x", "tests"]

Expected behavior
Tests to run successfully.

Actual Behavior

    def gevent_wait_callback(conn, timeout=None):
        """A wait callback useful to allow gevent to work with Psycopg."""
        while 1:
>           state = conn.poll()
E           sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
E               Is the server running on host "localhost" (127.0.0.1) and accepting
E               TCP/IP connections on port 5532?
E           could not connect to server: Cannot assign requested address
E               Is the server running on host "localhost" (::1) and accepting
E               TCP/IP connections on port 5532?
E           
E           (Background on this error at: http://sqlalche.me/e/e3q8)

/usr/local/lib/python3.6/site-packages/psycogreen/gevent.py:32: OperationalError

.....
# more errors
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/local/lib/python3.6/site-packages/pytest_mock_resources/container/__init__.py:37: in retriable_check_fn
    check_fn()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    def check_postgres_fn():
        try:
            get_sqlalchemy_engine(config["root_database"])
        except sqlalchemy.exc.OperationalError:
            raise ContainerCheckFailed(
                "Unable to connect to a presumed Postgres test container via given config: {}".format(
>                   config
                )
            )
E           pytest_mock_resources.container.ContainerCheckFailed: Unable to connect to a presumed Postgres test container via given config: {'username': 'user', 'password': 'password', 'port': 5532, 'root_database': 'dev', 'image': 'postgres:9.6.10-alpine'}

/usr/local/lib/python3.6/site-packages/pytest_mock_resources/container/postgres.py:56: ContainerCheckFailed

Additional context
When I run tests outside the container they run successfully.

Dependabot warning for docker subdependency

Describe the bug
Dependabot alert for docker dependency.

Environment

  • Windows
  • 3.9

To Reproduce
Getting dependabot alert for pywin32 which is being used by docker dependency. See below image.

Capture

Additional context
It appears there are thoughts of abandoning the docker python repo. Refer to this comment as a possible alternative docker/docker-py#2989 (comment).

Dependency on attrs package should be explicit

Hi, I'm one of the pytest maintainers.

I am working on a PR to pytest that will remove the attrs dependency. This means that plugins that use attrs but do not explicitly require it will break. From a search among pytest plugins, I found that pytest-mock-resources is one such plugin.

attrs is used in this file: https://github.com/schireson/pytest-mock-resources/blob/d522be82cd3a08b886fb504429b315f30d7280eb/src/pytest_mock_resources/patch/redshift/mock_s3_copy.py

Redshift patches are applied to all psycopg2 database connections in the test

Describe the bug
The redshift patch of psycopg2 connect and/or sqlalchemy create_engine is unconditional, and will therefore modify calls to those functions for engines other than the one we're trying to connect to.

I.e. if you create two fixtures, a postgres one and a redshift one, and attempt to use them both in the same test.

Possible solution: conditionally apply the patch based on the DSN of the call, ideally matching on as many fields as we can to keep the patch narrow: certainly drivername/host/port, and maybe database.
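A hedged sketch of the DSN-based gating idea; all names below are illustrative, not PMR internals:

```python
from urllib.parse import urlparse

# DSNs registered as belonging to redshift fixtures (illustrative registry).
REDSHIFT_DSNS = set()

def _key(dsn: str):
    # Match on drivername (scheme), host, port, and database, per the issue.
    url = urlparse(dsn)
    return (url.scheme, url.hostname, url.port, url.path.lstrip("/"))

def register_redshift_dsn(dsn: str) -> None:
    REDSHIFT_DSNS.add(_key(dsn))

def should_patch(dsn: str) -> bool:
    # Only patch connections targeting a known redshift fixture, leaving
    # plain postgres engines untouched.
    return _key(dsn) in REDSHIFT_DSNS

register_redshift_dsn("postgresql://user:pw@localhost:5532/redshift_db")
```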

Please don't depend on pandas

Pandas is an annoying and large dependency that should be simple to avoid by using Python's built-in csv module.
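As a hedged illustration, CSV parsing of the kind pandas is used for here can be done with the stdlib alone (a sketch, not the library's actual implementation):

```python
import csv
import io

def parse_csv_rows(raw: str, delimiter: str = ","):
    # Parse CSV text into a list of dicts using only the stdlib csv module.
    reader = csv.reader(io.StringIO(raw), delimiter=delimiter)
    header = next(reader)
    return [dict(zip(header, row)) for row in reader]

rows = parse_csv_rows("id,name\n1,alpha\n2,beta\n")
```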

Register the marks which PMR exports

Is your feature request related to a problem? Please describe.
pytest generates a bunch of warnings (at least within PMR itself) because PMR creates a number of marks and does not register them.
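Registering marks is conventionally done in a pytest_configure hook; a minimal sketch (the mark names below are placeholders, not necessarily PMR's actual marks):

```python
def pytest_configure(config):
    # Registering each mark up front suppresses pytest's unknown-mark warnings.
    config.addinivalue_line(
        "markers", "postgres: test requires a postgres container"
    )
    config.addinivalue_line(
        "markers", "redshift: test requires a redshift container"
    )
```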

Add support for MySQL 8

Is your feature request related to a problem? Please describe.

It seems that MySQL 8 (specifically 8.0.23) requires setting only MYSQL_ROOT_PASSWORD when using the 'root' user.

2021-12-03 09:32:36+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.23-1debian10 started.
2021-12-03 09:32:36+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-12-03 09:32:36+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.23-1debian10 started.
2021-12-03 09:32:36+00:00 [ERROR] [Entrypoint]: MYSQL_USER="root", MYSQL_USER and MYSQL_PASSWORD are for configuring a regular user and cannot be used for the root user
    Remove MYSQL_USER="root" and use one of the following to control the root user password:
    - MYSQL_ROOT_PASSWORD
    - MYSQL_ALLOW_EMPTY_PASSWORD
    - MYSQL_RANDOM_ROOT_PASSWORD

However, the private fixture _mysql_container passes MYSQL_USER and MYSQL_ROOT_PASSWORD together, and because 'root' is the default value of MysqlConfig's username field, the default config can never run correctly.

One might consider overriding the username field, but that requires setting MYSQL_PASSWORD environment variable to correctly create the custom user.

2021-12-03 09:31:33+00:00 [Warn] [Entrypoint]: MYSQL_USER specified, but missing MYSQL_PASSWORD; MYSQL_USER will not be created

I have tried overriding the _mysql_container fixture to provide additional environment variable with name MYSQL_PASSWORD, but the created user seems to be missing some privileges internally required by pytest-mock-resources.

data = b"\xff\x14\x04#42000Access denied for user 'test'@'%' to database 'pytest_mock_resource_db_1'"

    def raise_mysql_exception(data):
        errno = struct.unpack("<h", data[1:3])[0]
        errval = data[9:].decode("utf-8", "replace")
        errorclass = error_map.get(errno)
        if errorclass is None:
            errorclass = InternalError if errno < 1000 else OperationalError
>       raise errorclass(errno, errval)
E       sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1044, "Access denied for user 'test'@'%' to database 'pytest_mock_resource_db_1'")
E       [SQL: CREATE DATABASE pytest_mock_resource_db_1]
E       (Background on this error at: https://sqlalche.me/e/14/e3q8)

../../../../venv/lib/python3.7/site-packages/pymysql/err.py:143: OperationalError

Describe the solution you'd like

Proper support for different versions of MySQL, or additional fields in MysqlConfig to allow for more graceful overriding.

Describe alternatives you've considered

I'm currently manually overriding the private _mysql_container fixture as a temporary workaround.

@pytest.fixture(scope="session")
def _mysql_container(pmr_mysql_config):
    result = get_container(
        pmr_mysql_config,
        {3306: pmr_mysql_config.port},
        {
            "MYSQL_DATABASE": pmr_mysql_config.root_database,
            "MYSQL_ROOT_PASSWORD": pmr_mysql_config.password,
        },
        check_mysql_fn,
    )

    yield next(iter(result))


URL deprecation in source

When converting a Postgres fixture to a URL as in the docs, I get the following error:

tests/test_sample_db_mocking.py::test_sql_sum
  /home/gab/doc/supplybrain-data-validator/.venv/lib/python3.8/site-packages/pytest_mock_resources/fixture/database/generic.py:34: SADeprecationWarning: Calling URL() directly is deprecated and will be disabled in a future release.  The public constructor for URL is now the URL.create() method.
    return URL(

Which relates to this line.

I expect that the correction would be simply something like

return URL.create(
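Since PMR supports multiple SQLAlchemy versions, one way to make that fix version-safe is a small compatibility shim; the helper name here is mine, not PMR's.

```python
def url_create_compat(url_cls, **kwargs):
    """Build a SQLAlchemy URL without the deprecation warning.

    SQLAlchemy 1.4+ exposes the URL.create() classmethod; on older
    versions we fall back to calling the constructor directly.
    """
    create = getattr(url_cls, "create", None)
    if create is not None:
        return create(**kwargs)
    return url_cls(**kwargs)
```

The fixture code would then call `url_create_compat(URL, drivername=..., host=..., ...)` instead of `URL(...)` directly.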

$round operation for MongoDB


Environment
Ubuntu
Python 3.10
virtualenv==20.0.17

To Reproduce
Steps to reproduce the behavior:

  1. Prepare a collection with name and value fields and add a couple of docs.
  2. Run
db.my_col.aggregate(
    [
        {
            '$match': {
                'name': some_value,
            },
        },
        {
            '$group': {
                '_id': None,
                'lowest': {
                    '$min': '$value',
                },
            },
        },
        {
            '$project': {
                'lowest': {
                    '$round': ['$lowest', 3],
                },
            },
        },
    ]
)

  3. See error
    pymongo.errors.OperationFailure: Unrecognized expression '$round', full error: {'ok': 0.0, 'errmsg': "Unrecognized expression '$round'", 'code': 168, 'codeName': 'InvalidPipelineOperator'}

Expected behavior
It should return

{
    'lowest': value,
}

Actual Behavior

venv/lib/python3.10/site-packages/werkzeug/test.py:1131: in get
    return self.open(*args, **kw)
venv/lib/python3.10/site-packages/flask/testing.py:235: in open
    return super().open(
venv/lib/python3.10/site-packages/werkzeug/test.py:1076: in open
    response = self.run_wsgi_app(request.environ, buffered=buffered)
venv/lib/python3.10/site-packages/werkzeug/test.py:945: in run_wsgi_app
    rv = run_wsgi_app(self.application, environ, buffered=buffered)
venv/lib/python3.10/site-packages/werkzeug/test.py:1233: in run_wsgi_app
    app_rv = app(environ, start_response)
venv/lib/python3.10/site-packages/flask/app.py:2091: in __call__
    return self.wsgi_app(environ, start_response)
venv/lib/python3.10/site-packages/flask/app.py:2076: in wsgi_app
    response = self.handle_exception(e)
venv/lib/python3.10/site-packages/flask/app.py:2073: in wsgi_app
    response = self.full_dispatch_request()
venv/lib/python3.10/site-packages/flask/app.py:1518: in full_dispatch_request
    rv = self.handle_user_exception(e)
venv/lib/python3.10/site-packages/flask/app.py:1516: in full_dispatch_request
    rv = self.dispatch_request()
venv/lib/python3.10/site-packages/flask/app.py:1502: in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
src/api/__init__.py:205: in place_where_i_invoke_it
    db.my_col.aggregate(
venv/lib/python3.10/site-packages/pymongo/collection.py:2104: in aggregate
    return self._aggregate(
venv/lib/python3.10/site-packages/pymongo/collection.py:2026: in _aggregate
    return self.__database.client._retryable_read(
venv/lib/python3.10/site-packages/pymongo/mongo_client.py:1359: in _retryable_read
    return func(session, server, sock_info, secondary_ok)
venv/lib/python3.10/site-packages/pymongo/aggregation.py:134: in get_cursor
    result = sock_info.command(
venv/lib/python3.10/site-packages/pymongo/pool.py:742: in command
    return command(
venv/lib/python3.10/site-packages/pymongo/network.py:174: in command
    helpers._check_command_response(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = {'code': 168, 'codeName': 'InvalidPipelineOperator', 'errmsg': "Unrecognized expression '$round'", 'ok': 0.0}
max_wire_version = 6, allowable_errors = None, parse_write_concern_error = True

    def _check_command_response(
        response, max_wire_version, allowable_errors=None, parse_write_concern_error=False
    ):
        """Check the response to a command for errors."""
        if "ok" not in response:
            # Server didn't recognize our message as a command.
            raise OperationFailure(
                response.get("$err"), response.get("code"), response, max_wire_version
            )
    
        if parse_write_concern_error and "writeConcernError" in response:
            _error = response["writeConcernError"]
            _labels = response.get("errorLabels")
            if _labels:
                _error.update({"errorLabels": _labels})
            _raise_write_concern_error(_error)
    
        if response["ok"]:
            return
    
        details = response
        # Mongos returns the error details in a 'raw' object
        # for some errors.
        if "raw" in response:
            for shard in response["raw"].values():
                # Grab the first non-empty raw error from a shard.
                if shard.get("errmsg") and not shard.get("ok"):
                    details = shard
                    break
    
        errmsg = details["errmsg"]
        code = details.get("code")
    
        # For allowable errors, only check for error messages when the code is not
        # included.
        if allowable_errors:
            if code is not None:
                if code in allowable_errors:
                    return
            elif errmsg in allowable_errors:
                return
    
        # Server is "not primary" or "recovering"
        if code is not None:
            if code in _NOT_PRIMARY_CODES:
                raise NotPrimaryError(errmsg, response)
        elif HelloCompat.LEGACY_ERROR in errmsg or "node is recovering" in errmsg:
            raise NotPrimaryError(errmsg, response)
    
        # Other errors
        # findAndModify with upsert can raise duplicate key error
        if code in (11000, 11001, 12582):
            raise DuplicateKeyError(errmsg, code, response, max_wire_version)
        elif code == 50:
            raise ExecutionTimeout(errmsg, code, response, max_wire_version)
        elif code == 43:
            raise CursorNotFound(errmsg, code, response, max_wire_version)

Document semantics around queries on global objects

Describe the bug
It may not be obvious to a client that we (by default) reuse the container across tests, and the semantic effects that has.

i.e.

  • may interfere with queries against tables like pg_database

The solution, if this affects someone, would be to change the behavior (either via some future "strategy" feature, or via the fixture scope) by opting out of container sharing and into starting/stopping a container per test.

Mysql/MariaDb support?

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
Add mysql/mariadb integration

Describe alternatives you've considered
None at the moment

Additional context
Is this currently being worked on? If not, I'd like to take it up.

Expose mechanism for customizing container-level configuration

Is your feature request related to a problem? Please describe.
Today we have a general assumption of things like host and port, and a lack of configuration for other container-specific settings. Implementing something like a vault resource requires more configuration, which is hard to reliably predict will work in all circumstances.

Furthermore, we make basic assumptions about things like the container version, which would be the first thing I'd personally want to customize to match our deployed resource version.

Describe the solution you'd like
Option 1:
Expose a pmr_<resources>_config fixture for each fixture type which describes resource-level configuration for that resource, for example:

@pytest.fixture(scope="session")
def pmr_postgres_config():
    return {'host': 'localhost', 'image': 'postgres:tag', ...}

We can start with a dict, but it might be advantageous to preemptively use a class of some type, potentially resource-specific. It probably makes sense to use this as an override of the defaults on a per-config-option basis.

Additionally, this would make making use of the config in tests for e.g. custom connections a lot more straightforward, compared to what we do today.

Option 1b:
same thing, but a global pmr_config fixture. I think this might be objectively worse because it's far more awkward to use inside a test for a manual connection.

Option 2:
Accept PMR_<RESOURCE>_<CONFIG_OPTION_NAME> (i.e. PMR_POSTGRES_PORT) for all the various configuration values that are available for each resource type. Presumably we'd read these dynamically and on-demand (rather than at module level, as is the case now), rather than having them all statically defined.

Option 3:
Do both.

Things like image (and other resource-specific settings) would tend to be static and system dependent. I wouldn't want to have to specify it in my env, as it ought to be tied to the version you're using at any given time for that project.

Things like port, or host are going to be more specific to the environment in which you're running them (CI, or just ephemeral port conflicts)
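A sketch of how options 1 and 2 could layer, assuming defaults are overridden first by PMR_<RESOURCE>_<OPTION> environment variables and then by an explicit fixture-supplied dict; all names here are illustrative, not PMR's implementation.

```python
import os

# Illustrative defaults; the real fixture would own these per resource.
POSTGRES_DEFAULTS = {
    "host": "localhost",
    "port": "5532",
    "image": "postgres:9.6.10-alpine",
}


def load_config(resource, defaults, overrides=None):
    """Merge defaults < environment variables < explicit fixture overrides."""
    config = dict(defaults)
    for key in defaults:
        env_value = os.environ.get(f"PMR_{resource.upper()}_{key.upper()}")
        if env_value is not None:
            config[key] = env_value
    config.update(overrides or {})
    return config
```

This keeps static, project-tied settings (like image) in the fixture override, while environment-specific ones (like port) can vary per CI run.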

console-script for starting persistent container

Is your feature request related to a problem? Please describe.
It takes forever to start a docker container

Describe the solution you'd like
A pytest-mock-resources postgres/mongo console script which auto-runs the docker command you document in the readme.

Ideally this would block and you could ctrl+c it to stop

Drop python 2.7 support

Is your feature request related to a problem? Please describe.
It's a maintenance burden, and pytest itself only supports 2.7 up to 4.6.X.

We should maybe produce a final minor release with 2.7 support so we theoretically have a clean 1.X line onto which to backport fixes if it ever becomes important.

Async fixtures support

Is your feature request related to a problem? Please describe.
Sqlalchemy 1.4 & 2.0 were released with async support. The current fixtures produce synchronous engines so they can't be used with async code.

Describe the solution you'd like
Add support for async sqlalchemy. Maybe by adding a parameter such as async_ to the fixture functions

Describe alternatives you've considered
I haven't found any

Additional context
Here's a code snippet:

# returns an async engine
engine = create_postgres_fixture(Base, scope="session", async_=True)
# or
engine = create_async_postgres_fixture(Base, scope="session")

We might have to use pytest-asyncio

I'm already working on something. I'm willing to do a PR for this.

Getting errors when using `create_redis_fixture`

Describe the bug
I'm trying to use pytest-mock-resources for the first time to mock a Redis client. I've used the instructions at https://pytest-mock-resources.readthedocs.io/en/latest/redis.html as guidance, but I'm running into errors. Not sure if they're caused by bugs, or if I'm doing something wrong with the API. Thanks in advance.

Environment

  • Host OS: macOS
  • Docker image if applicable: n/a
  • Python Version: 3.9.16
  • Virtualenv/Pyenv etc.. if applicable

To Reproduce
Steps to reproduce the behavior:

  1. Save the two snippets below as myapp.py and test_myapp.py
  2. Run python -m pip install pytest redis pytest-mock-resources to install the necessary dependencies
  3. Run python -m pytest

Expected behavior
The two tests, test_set_item and test_set_item2, contained in test_myapp.py should succeed.

Actual Behavior
Both tests fail. The result is shown below.

FAILED test_myapp.py::test_set_item - AttributeError: 'function' object has no attribute 'hset'
ERROR test_myapp.py::test_set_item2 - AttributeError: 'function' object has no attribute 'pmr_credentials'

Additional context
Contents of myapp.py

from redis import Redis

REDIS_CLIENT = Redis(host='localhost', port=6379, db=0)
REDIS_KEY = "some:key"

def set_item(idx, value):
    REDIS_CLIENT.hset(REDIS_KEY, idx, value)

Contents of test_myapp.py

from unittest.mock import patch

import pytest
from pytest_mock_resources import create_redis_fixture
from redis import Redis

from myapp import set_item, REDIS_KEY


@pytest.fixture
def redis_client_fixt():
    yield create_redis_fixture()


def test_set_item(redis_client_fixt):
    with patch('myapp.REDIS_CLIENT', redis_client_fixt) as redis_client:
        idx = 1234
        value = "some_value"
        set_item(idx, value)

        assert redis_client.hget(REDIS_KEY, idx) == value


@pytest.fixture
def redis_client_fixt2():
    redis = create_redis_fixture()
    yield Redis(**redis.pmr_credentials.as_redis_kwargs())


def test_set_item2(redis_client_fixt2):
    with patch('myapp.REDIS_CLIENT', redis_client_fixt2) as redis_client:
        idx = 1234
        value = "some_value"
        set_item(idx, value)

        assert redis_client.hget(REDIS_KEY, idx) == value
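Both failures appear to stem from calling create_redis_fixture() inside another fixture: the factory returns a pytest fixture *function*, which (per the docs and the other issues here) must be assigned at module scope, e.g. `redis = create_redis_fixture()`, and then requested by name in the test. A container-free toy, with a stand-in factory, shows why the AttributeError appears:

```python
# Stand-in for create_redis_fixture: fixture factories return a function
# for pytest to collect, not a client object. This toy factory is
# illustrative only; it is not PMR's implementation.
def create_toy_fixture():
    def _fixture():
        yield "a real client would be produced here"
    return _fixture


fixture_fn = create_toy_fixture()

# Yielding fixture_fn from another fixture hands the test this bare
# function, which has no redis methods -- hence the AttributeError.
assert callable(fixture_fn)
assert not hasattr(fixture_fn, "hset")
assert not hasattr(fixture_fn, "pmr_credentials")
```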

AttributeError: module 'sqlalchemy.dialects.sqlite.base' has no attribute 'JSON' using SQLAlchemy 1.2


Environment

  • Host OS: Mac OS 10.15.7
  • Docker image if applicable
  • Python Version: 3.6.7
  • Virtualenv/Pyenv etc.. if applicable

To Reproduce
Steps to reproduce the behavior:

  1. Use a project which pins SQLAlchemy 1.2.19
  2. Install latest pytest mock resources
  3. Create a simple test that simply asserts True == True, with no DB connection init.
  4. run pytest
  5. see error: AttributeError: module 'sqlalchemy.dialects.sqlite.base' has no attribute 'JSON'

Expected behavior
the test passes

Actual Behavior

  File "/path/to/.venv/lib/python3.6/site-packages/pytest_mock_resources/fixture/database/__init__.py", line 4, in <module>
    from pytest_mock_resources.fixture.database.relational import (
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "/path/to/.venv/lib/python3.6/site-packages/_pytest/assertion/rewrite.py", line 171, in exec_module
    exec(co, module.__dict__)
  File "/path/to/.venv/lib/python3.6/site-packages/pytest_mock_resources/fixture/database/relational/__init__.py", line 5, in <module>
    from pytest_mock_resources.fixture.database.relational.sqlite import create_sqlite_fixture
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "/path/to/.venv/lib/python3.6/site-packages/_pytest/assertion/rewrite.py", line 171, in exec_module
    exec(co, module.__dict__)
  File "/path/to/.venv/lib/python3.6/site-packages/pytest_mock_resources/fixture/database/relational/sqlite.py", line 111, in <module>
    class PMRSQLiteDialect(SQLiteDialect_pysqlite):
  File "/path/to/.venv/lib/python3.6/site-packages/pytest_mock_resources/fixture/database/relational/sqlite.py", line 122, in PMRSQLiteDialect
    sqltypes.JSON: sqlite_base.JSON,
AttributeError: module 'sqlalchemy.dialects.sqlite.base' has no attribute 'JSON'

Additional context
This is not an issue if I upgrade SQLAlchemy to 1.3.0.

sqlite doesn't support Decimal objects natively

The sqlite dialect emits a warning every time you try to persist a Decimal object. Options include:

  • silence the warning and live with the potential for rounding errors (which I think is fine, honestly)
  • transparently swap Numeric column types that would accept decimals to string-typed columns, while making sure they otherwise act the same with filters, comparisons, inserts and whatnot.
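At the raw driver level, the second option looks roughly like the stdlib-only sketch below; the DECTEXT type name is made up for illustration, and the real work would be doing the same transparently through SQLAlchemy's type system.

```python
import decimal
import sqlite3

# Store Decimals as TEXT and convert back on read, avoiding float rounding.
sqlite3.register_adapter(decimal.Decimal, str)
sqlite3.register_converter("DECTEXT", lambda raw: decimal.Decimal(raw.decode()))

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE prices (amount DECTEXT)")
conn.execute("INSERT INTO prices VALUES (?)", (decimal.Decimal("19.99"),))
amount = conn.execute("SELECT amount FROM prices").fetchone()[0]
# amount round-trips exactly as a Decimal, no float involved
```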

Allow defining run_args

Is your feature request related to a problem? Please describe.
Defining custom run_args / run_kwargs for a container would be nice.

I've run into some issues using timescale/timescaledb-ha (specifically timescale/timescaledb-docker-ha#366), and since I can't work around that issue with environment variables, maybe adding run parameters here could help instead.

Describe the solution you'd like
DockerContainerConfig.run_kwargs / run_args to be added and used in wait_for_container() to customise the container's run command.

Describe alternatives you've considered
The problem was only intermittent when using psycopg2 to set up the timescale container, so short term I think I'll go back to using that (but thanks again for #183)

I would also hope that TimescaleDB might bring their -ha image into line with their base image, but that's probably a slow process...

If I had time I'd submit a PR. If my workaround works with 2.6.10, or I roll back to 2.6.7, I'll carry on for now, but I would like to submit a PR soon-ish (most likely mid-to-late March with my current schedule)

Error opening connection from redshift_connector

Describe the bug
When trying to connect to the PMR redshift container DB with redshift_connector, I get the following error.

ProgrammingError({'S': 'FATAL', 'V': 'FATAL', 'C': '42704', 'M': 'unrecognized configuration parameter "client_protocol_version"', 'F': 'guc.c', 'L': '5858', 'R': 'set_config_option'})

Environment

  • Host OS: Window running Ubuntu in WSL
  • Docker image if applicable: postgres:9.6.10-alpine
  • Python Version: 3.9.12
  • Virtualenv/Pyenv etc.. if applicable: virtualenv

To Reproduce
Using redshift_connector when redshift_connector.connect below is called from a test I get the error.

import pytest
from pytest_mock_resources import create_redshift_fixture
import redshift_connector

redshift_fixture = create_redshift_fixture()


@pytest.fixture()
def connection_fixture(monkeypatch, redshift_fixture):
    def get_connection():
        credentials = redshift_fixture.pmr_credentials
        connection = redshift_connector.connect(
            application_name="My App (Tests)",
            host=credentials.host,
            port=credentials.port,
            ssl=False,
            database=credentials.database,
            user=credentials.username,
            password=credentials.password,
        )
        return connection
    return get_connection

Expected behavior
I would like to be able to connect to the Redshift fixture DB with redshift_connector.

Actual Behavior

/home/mlambert/venvs/myapp/lib/python3.9/site-packages/requests/sessions.py:600: in get
    return self.request("GET", url, **kwargs)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/testclient.py:476: in request
    return super().request(
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/requests/sessions.py:587: in request
    resp = self.send(prep, **send_kwargs)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/requests/sessions.py:701: in send
    r = adapter.send(request, **kwargs)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/testclient.py:270: in send
    raise exc
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/testclient.py:267: in send
    portal.call(self.app, scope, receive, send)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/anyio/from_thread.py:283: in call
    return cast(T_Retval, self.start_task_soon(func, *args).result())
/home/mlambert/.pyenv/versions/3.9.12/lib/python3.9/concurrent/futures/_base.py:446: in result
    return self.__get_result()
/home/mlambert/.pyenv/versions/3.9.12/lib/python3.9/concurrent/futures/_base.py:391: in __get_result
    raise self._exception
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/anyio/from_thread.py:219: in _call_func
    retval = await retval
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/fastapi/applications.py:269: in __call__
    await super().__call__(scope, receive, send)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/applications.py:124: in __call__
    await self.middleware_stack(scope, receive, send)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/middleware/errors.py:184: in __call__
    raise exc
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/middleware/errors.py:162: in __call__
    await self.app(scope, receive, _send)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/exceptions.py:93: in __call__
    raise exc
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/exceptions.py:82: in __call__
    await self.app(scope, receive, sender)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py:21: in __call__
    raise e
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py:18: in __call__
    await self.app(scope, receive, send)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/routing.py:670: in __call__
    await route.handle(scope, receive, send)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/routing.py:266: in handle
    await self.app(scope, receive, send)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/starlette/routing.py:65: in app
    response = await func(request)
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/fastapi/routing.py:217: in app
    solved_result = await solve_dependencies(
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/fastapi/dependencies/utils.py:525: in solve_dependencies
    solved = await solve_generator(
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/fastapi/dependencies/utils.py:449: in solve_generator
    return await stack.enter_async_context(cm)
/home/mlambert/.pyenv/versions/3.9.12/lib/python3.9/contextlib.py:575: in enter_async_context
    result = await _cm_type.__aenter__(cm)
/home/mlambert/.pyenv/versions/3.9.12/lib/python3.9/contextlib.py:181: in __aenter__
    return await self.gen.__anext__()
/home/mlambert/venvs/myapp/lib/python3.9/site-packages/fastapi/concurrency.py:30: in contextmanager_in_threadpool
    raise e
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <contextlib._GeneratorContextManager object at 0x7f1debbb5610>
typ = <class 'redshift_connector.error.ProgrammingError'>
value = ProgrammingError({'S': 'FATAL', 'V': 'FATAL', 'C': '42704', 'M': 'unrecognized configuration parameter "client_protocol_version"', 'F': 'guc.c', 'L': '5858', 'R': 'set_config_option'})
traceback = None

    def __exit__(self, typ, value, traceback):
        if typ is None:
            try:
                next(self.gen)
            except StopIteration:
                return False
            else:
                raise RuntimeError("generator didn't stop")
        else:
            if value is None:
                # Need to force instantiation so we can reliably
                # tell if we get the same exception back
                value = typ()
            try:
>               self.gen.throw(typ, value, traceback)
E               redshift_connector.error.ProgrammingError: {'S': 'FATAL', 'V': 'FATAL', 'C': '42704', 'M': 'unrecognized configuration parameter "client_protocol_version"', 'F': 'guc.c', 'L': '5858', 'R': 'set_config_option'}


Hardcoded config for MySQL

container/mysql.py has a hardcoded config dict which is used immediately to create the _mysql_container = ... object. This makes it impossible to override these config settings, for example the docker image.

Can't use schemas with sqlite

Describe the bug

E sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "SCHEMA": syntax error [SQL: 'CREATE SCHEMA IF NOT EXISTS viacom'] (Background on this error at: http:/

To Reproduce

  • use the sqlite fixture
  • try to use a table which lives inside a schema

Expected behavior
it creates the schema for me!

sqlite loses timezone awareness

https://stackoverflow.com/questions/36730671/how-to-store-timezone-aware-timestamps-in-sqlite3-with-python

e.g. if you save a UTC-timezone'd datetime, when it comes back out it will be a datetime object without a timezone, which means you get errors if you try to compare it to a timezone-aware datetime.

Since you can't determine the tz once it's in the db anyway, I just request we use the adapter from the above example, not to localize the timezone, but to add tzinfo=timezone.utc to the vanilla datetime object which is returned
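A stdlib sketch of such an adapter, assuming timestamps are stored as naive ISO-format strings (sqlite3's default) and should simply be reinterpreted as UTC on the way out:

```python
import sqlite3
from datetime import datetime, timezone


def convert_timestamp(raw):
    # Parse the naive stored value, then tag it as UTC rather than localize.
    return datetime.fromisoformat(raw.decode()).replace(tzinfo=timezone.utc)


sqlite3.register_converter("TIMESTAMP", convert_timestamp)

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE events (ts TIMESTAMP)")
conn.execute("INSERT INTO events VALUES (?)", ("2021-06-01 12:30:00",))
ts = conn.execute("SELECT ts FROM events").fetchone()[0]
# ts is now timezone-aware, so comparing it to aware datetimes works
```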

psycopg2 is secretly required

Describe the bug
Depending on pytest-mock-resources requires you to also have psycopg2 in your environment. A previous PR (I think) removed the setup.py install requirement (because not every consumer will use pg), but we still perform imports of psycopg2, which fail when it isn't installed.

Expected behavior
psycopg2 should remain an optional extra, but we should guard the imports such that it allows users to not have it installed.
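One common way to guard the imports is to resolve the optional dependency lazily and fail with a pointed message only when a postgres fixture is actually used; the helper names below are illustrative.

```python
import importlib


def optional_import(name):
    """Import a module if available; return None so callers can fail lazily."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None


psycopg2 = optional_import("psycopg2")


def require_postgres_client():
    # Only raise when a postgres fixture is actually requested.
    if psycopg2 is None:
        raise RuntimeError(
            "psycopg2 is required for postgres fixtures; install the "
            "appropriate extra, e.g. pytest-mock-resources[postgres]."
        )
    return psycopg2
```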

Can't execute multiple tests in parallel

Is your feature request related to a problem? Please describe.
pytest -n 4

E       sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "pg_database_datname_index"
E       DETAIL:  Key (datname)=(pytest_mock_resource_db_386) already exists.
E        [SQL: 'CREATE DATABASE "pytest_mock_resource_db_386"'] (Background on this error at: http://sqlalche.me/e/gkpj)

Describe the solution you'd like
Starting here:
https://github.com/schireson/schireson-pytest-mock-resources/blob/master/src/pytest_mock_resources/fixture/database/relational/postgresql.py#L110

Firstly, we shouldn't be setting the isolation level like that; but more importantly, the way it allocates the database names means we can't execute tests in parallel.

This is really unfortunate because otherwise the design seems really amenable to parallel use. This is a hard blocker for use in APIs like flight-manager, which would be really nice!
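One direction for the naming collision specifically: derive database names from a random suffix instead of a shared sequential counter, so xdist workers cannot race each other for the same name. A sketch, not the project's implementation:

```python
import uuid


def generate_database_name(prefix="pytest_mock_resource_db"):
    # A 12-hex-char random suffix makes cross-worker collisions negligible,
    # whereas sequential integers collide as soon as two workers share state.
    return f"{prefix}_{uuid.uuid4().hex[:12]}"
```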

Support SQLAlchemy 1.4 future mode

Is your feature request related to a problem? Please describe.
SQLAlchemy 1.4 Engines and Sessions have a future mode which we're supposed to use to prepare for 2.0, but PMR doesn't support passing additional parameters to Engine.__init__, so we can't enable it for our tests.

Describe the solution you'd like
Either a future: bool parameter to MysqlConfig and PostgresConfig which sets future on Engines and Sessions, or an engine_options: dict parameter to allow arbitrary extra parameters for Engine.__init__ (and then we can use a sessionmaker to futurize the Session)? I'm happy to implement it, I just don't want to step on anyone's toes.

[Support] Tests hang when using fixture

I doubt this is a bug, and I'm just doing something wrong. I am not sure where else to comment for support. Sorry for the noise.

Describe the bug
Whenever I run a test that tries to use a fixture created with create_redshift_fixture, the test just hangs. When I debug, I see create_redshift_fixture execute and return, and the test hangs shortly after that. I never see a container start up until I kill the test session; then I see the container running.

Environment

  • Host OS: Ubuntu
  • Docker image if applicable: postgres:9.6.10-alpine
  • Python Version: 3.9.12
  • Virtualenv/Pyenv etc.. if applicable: venv

To Reproduce

conftest.py

import pytest
from pytest_mock_resources import create_redshift_fixture

redshift_fixture = create_redshift_fixture()

test_mymodule.py

def test_func(redshift_fixture):
    # I can reproduce this by accessing the fixture, or with just an empty test as shown.
    assert True == True

Expected behavior
Test not to hang.

Actual Behavior
Test hangs


Add Redis to the CLI

Is your feature request related to a problem? Please describe.
Currently, it's not possible to run Redis using the CLI, i.e. pmr redis. I get the following error:

Traceback (most recent call last):
  File "/home/michael/.cache/pypoetry/virtualenvs/app-d-AWDS2Y-py3.9/bin/pmr", line 8, in <module>
    sys.exit(main())
  File "/home/michael/.cache/pypoetry/virtualenvs/app-d-AWDS2Y-py3.9/lib/python3.9/site-packages/pytest_mock_resources/cli.py", line 79, in main
    command = FixtureBase(fixture_base).command
  File "/usr/lib/python3.9/enum.py", line 384, in __call__
    return cls.__new__(cls, value)
  File "/usr/lib/python3.9/enum.py", line 702, in __new__
    raise ve_exc
ValueError: 'redis' is not a valid FixtureBase

Describe the solution you'd like
It would be nice if we could support Redis as well.

I can open up a PR for this.
