dbutils's Introduction

DBUtils

DBUtils is a suite of tools providing solid, persistent and pooled connections to a database that can be used in all kinds of multi-threaded environments.

The suite supports DB-API 2 compliant database interfaces and the classic PyGreSQL interface.

The current version 3.1.0 of DBUtils supports Python versions 3.7 to 3.12.

Please have a look at the changelog, because there were some breaking changes in version 2.0.

The DBUtils home page can be found at https://webwareforpython.github.io/DBUtils/

dbutils's People

Contributors

cclauss, cito, leofiore, qizidog, sunpoet


dbutils's Issues

Why does calling a stored procedure sometimes return None for the output parameters?

  • OS: Ubuntu 16.04
  • Python: 3.6
  • pymysql: 0.9.2
  • MySQL: 5.7.23
  • dbutils: 1.3

The PROCEDURE code is:

CREATE DEFINER=`root`@`%` PROCEDURE `P_TEST`(OUT aCode varchar(4), OUT aMsg 
    varchar(16), IN aAppName varchar(16))
COMMENT 'test'
BEGIN
    set aCode = '1';
    set aMsg = 'err_msg';
    select aAppName;
END

and the Python code is:

def executeProc(aProcName, aParams):
    tmpDbConn = None
    tmpCursor = None
    try:
        tmpListData = list(aParams)
        tmpListData.insert(0, '')
        tmpListData.insert(0, '')
        aParams = tuple(tmpListData)
        print(aProcName, aParams)

        tmpDbConn = DBPools.connection()
        tmpCursor = DBPools.connection().cursor()
        tmpCursor.callproc(aProcName, aParams)
        tmpDatas1 = tmpCursor.fetchall()
        print(tmpDatas1)
        tmpCursor.execute('select @_%s_0, @_%s_1 ;' % (aProcName, aProcName))
        tmpDatas2 = tmpCursor.fetchall()
        print(tmpDatas2)
        code = tmpDatas2[0][0]
        msg = tmpDatas2[0][1]
        tmpCursor.close()
        tmpDbConn.close()
        return (code, msg, tmpDatas1)
    except InternalError as e:
        print(e)
        return (sqlServerInternalError, all_code[sqlServerInternalError])
    except ProgrammingError as e:
        print(e)
        return (sqlServerProgrammingError, all_code[sqlServerProgrammingError])
    except InterfaceError as e:
        print(e)
        return (sqlServerConnectFail, all_code[sqlServerConnectFail])
    except OperationalError as e:
        print(e)
        return (sqlServerInterfaceError, all_code[sqlServerInterfaceError])
    except Exception as e:
        print(e)
        return (sqlServerException, all_code[sqlServerException])
    finally:
        if tmpCursor:
            tmpCursor.close()
        if tmpDbConn:
            tmpDbConn.close()


if __name__ == "__main__":
    for i in range(100):
        executeProc('P_TEST', ('a'))

and the test result is:

P_TEST ('', '', 'a')
(('a',),)
(('1', 'err_msg'),)
P_TEST ('', '', 'a')
(('a',),)
look here : ((None, None),)
P_TEST ('', '', 'a')
(('a',),)
look here : ((None, None),)
P_TEST ('', '', 'a')
(('a',),)
(('1', 'err_msg'),)
P_TEST ('', '', 'a')
(('a',),)
(('1', 'err_msg'),)

This happens only occasionally and seemingly at random. Where is the problem? Thanks!
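
A side note on the code above (an observation, not a confirmed diagnosis): the @_P_TEST_* user variables are per-session, so anything that silently swaps or recycles the underlying connection between callproc() and the follow-up SELECT will read them back as None. Two candidates in this code are the unreferenced wrapper from the second DBPools.connection() call (which can be returned to the pool while its cursor is still in use) and a maxusage limit on the pool. A minimal sketch that holds one connection and takes the cursor from it, assuming pymysql's @_<procname>_<n> convention for OUT parameters:

# Sketch: keep callproc() and the @_proc_N reads on the SAME connection.
tmpDbConn = DBPools.connection()
tmpCursor = tmpDbConn.cursor()  # cursor from the same connection object
try:
    tmpCursor.callproc('P_TEST', ('', '', 'a'))
    tmpDatas1 = tmpCursor.fetchall()  # result set of the SELECT inside the proc
    tmpCursor.execute('SELECT @_P_TEST_0, @_P_TEST_1')
    code, msg = tmpCursor.fetchall()[0]  # OUT parameters, read in the same session
finally:
    tmpCursor.close()
    tmpDbConn.close()  # returns the connection to the pool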

Multiple questions...

  1. Is there a more complete documentation of the API?
  2. Can DBUtils manage (load balance) a clustered database? (can be as simple as passing in multiple db addresses and allocating connections round robin)
  3. How should I best declare PersistentDB (global? passed as an argument?) to allow my threads to open and access connections? (See the sketch after this list.)
  4. Is there any cleanup necessary on the thread or process level?

Thanks!
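
Regarding question 3, a common pattern (a sketch, not official guidance; the connection parameters are placeholders) is to create the PersistentDB generator once at module level and have each thread request its own connection from it:

import pymysql
from dbutils.persistent_db import PersistentDB

# Created once at import time; safe to share between threads.
persist = PersistentDB(creator=pymysql, host='localhost', user='user',
                       password='secret', database='mydb')

def worker():
    con = persist.connection()  # each thread gets its own persistent connection
    cur = con.cursor()
    cur.execute('SELECT 1')
    print(cur.fetchone())
    cur.close()
    con.close()  # no-op by default: the connection is kept open for this thread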

mssql failover and segmentation fault

Hi,
I ran into a problem: when I use this package to connect to MSSQL (pymssql) and the server fails over, my program gets killed with only the message 'Segmentation fault'.
I use strace to track the error, and get messages like:
close(5) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SI_KERNEL, si_addr=0} ---
+++ killed by SIGSEGV +++
null pointer?

Session expire problem

I declared a Flask service as shown below to execute INSERT or UPDATE SQL against MySQL 5.7. However, after about 10 days, the service can no longer insert or update any data, and no exception is thrown.

@app.route('/data-sync/increment', methods=['POST'])
def increment_data_sync():
    if not request.json or 'dest_db' not in request.json or 'sql' not in request.json:
        abort(400)
    else:
        sql = request.json['sql']
        dest_conn = dest_db_pool.connection()
        try:
            with dest_conn.cursor() as cursor:
                print(sql)
                cursor.execute(sql)
                cursor.close()
        except Exception:
            traceback.print_exc()
            return jsonify({'code': -1, 'msg': 'failed', 'data': None})
        finally:
            dest_conn.close()
        return jsonify({'code': 0, 'msg': 'success', 'data': None})

if __name__ == "__main__":
    ConfigParser.load_config()
    dest_conn_setting = {'host': ConfigParser.get_dest_db_host(), 'port': ConfigParser.get_dest_db_port(),
                         'user': ConfigParser.get_dest_db_usr(), 'passwd': ConfigParser.get_dest_db_passwd()}
    dest_db_pool = PooledDB(creator=pymysql, mincached=2, maxcached=20, maxshared=20, maxconnections=20, ping=7,
                            blocking=False, maxusage=0, setsession=['SET AUTOCOMMIT = 1'], **dest_conn_setting)
    app.run(host="0.0.0.0", port=8001, debug=True)

Any suggestions? Thank you.

Connection won't reconnect if PostgreSQL server restarted

import random
import threading
import time
import traceback

import psycopg2
from dbutils.pooled_db import PooledDB

pool = PooledDB(
    psycopg2,
    # failures=(psycopg2.InterfaceError, psycopg2.OperationalError, psycopg2.InternalError),
    host='localhost',
)


def w():
    while True:
        try:
            with pool.connection() as db:
                with db.cursor() as cursor:
                    cursor.execute("select 1")
                    print(cursor.fetchall())
                db.commit()
        except Exception:
            traceback.print_exc()
        time.sleep(random.random())


for _ in range(10):
    t = threading.Thread(target=w)
    t.start()

Run this code. It prints lots of [(1,)]. Then restart PostgreSQL.
I got many psycopg2.InterfaceError: connection already closed errors, and it won't recover.

Maybe InterfaceError should be in the default value of failures.

PooledDB setsession argument does not work on new connection.

Create a PooledDB, get a connection, and use the setsession argument to set autocommit = 1; that works fine.

BUT when the connection is no longer available and PooledDB tries to get a new connection, the setsession argument is not applied: autocommit changes back from 1 to 0.

Here is some example code.

from DBUtils.PooledDB import PooledDB
import pymysql

pool = PooledDB(creator=pymysql,
                maxconnections=4,
                mincached=2,
                maxcached=0,
                blocking=True,
                maxusage=None,
                setsession=['SET AUTOCOMMIT = 1'],    # Pay attention to this
                host='127.0.0.1',
                port=3306,
                user='root',
                password='pwd',
                database='alerter',
                charset='utf8')

insert_sql = "INSERT INTO `alerter`.`al_play_log` (`src_type`, `src_ip`, `led_style`, `led_color`, `led_flag`, `text`, `play_time`, `tone`) " \
             "VALUES (4, '192.168.0.210', 15, '[\"#00FFFF\"]', '[0.05, 1]', 'content', 0, NULL)"

conn = pool.connection()
cursor = conn.cursor()
ret = cursor.execute(insert_sql)
conn.close()

print('new connection create,check log, then use mysql kill command to kill current connect.')
input()
conn = pool.connection()
cursor = conn.cursor()
ret = cursor.execute(insert_sql)
conn.close()

Execute the code according to the following steps:

  1. Run the code; the pool is initialized and the INSERT SQL executed.
  2. While the code is paused, go to MySQL, use the show processlist command to get the current thread ids, and use the kill command to kill them.
  3. Return to the code and press any key to continue execution.
  4. Check the MySQL general log.

It can be seen that autocommit does not take effect on the new connection.

This is the log of my database

2022-05-16T03:13:48.758866Z	  207 Connect	root@localhost on alerter using TCP/IP
2022-05-16T03:13:48.760342Z	  207 Query	SET AUTOCOMMIT = 0
2022-05-16T03:13:48.761637Z	  207 Query	SET AUTOCOMMIT = 1
2022-05-16T03:13:48.766279Z	  208 Connect	root@localhost on alerter using TCP/IP
2022-05-16T03:13:48.767477Z	  208 Query	SET AUTOCOMMIT = 0
2022-05-16T03:13:48.768686Z	  208 Query	SET AUTOCOMMIT = 1
2022-05-16T03:13:48.769913Z	  208 Query	ROLLBACK
2022-05-16T03:13:48.770811Z	  207 Query	ROLLBACK
2022-05-16T03:13:48.772879Z	  208 Query	INSERT INTO `alerter`.`al_play_log` (`src_type`, `src_ip`, `led_style`, `led_color`, `led_flag`, `text`, `play_time`, `tone`) VALUES (4, '192.168.0.210', 15, '["#00FFFF"]', '[0.05, 1]', 'content', 0, NULL)
2022-05-16T03:13:48.783426Z	  208 Query	ROLLBACK
2022-05-16T03:13:52.059120Z	  192 Query	show processlist
2022-05-16T03:13:59.223979Z	  192 Query	kill 207
2022-05-16T03:14:02.462124Z	  192 Query	kill 208
2022-05-16T03:14:13.903155Z	  209 Connect	root@localhost on alerter using TCP/IP
***2022-05-16T03:14:13.904506Z	  209 Query	SET AUTOCOMMIT = 0***
2022-05-16T03:14:13.906474Z	  209 Query	INSERT INTO `alerter`.`al_play_log` (`src_type`, `src_ip`, `led_style`, `led_color`, `led_flag`, `text`, `play_time`, `tone`) VALUES (4, '192.168.0.210', 15, '["#00FFFF"]', '[0.05, 1]', 'content', 0, NULL)
2022-05-16T03:14:13.908776Z	  209 Query	ROLLBACK
2022-05-16T03:14:13.956031Z	  209 Quit	

DBUtils

First of all, thanks for the author's contribution; this is a good connection tool. I have a question about how to connect to multiple databases: I'm currently using a master-slave database setup.
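
DBUtils itself does not route or load-balance between servers (see also question 2 in "Multiple questions..." above), so a straightforward approach (a sketch; host names and credentials are placeholders) is to create one pool per database and choose the pool per query:

import pymysql
from dbutils.pooled_db import PooledDB

# One pool per server.
master_pool = PooledDB(creator=pymysql, host='master.db.local', user='app',
                       password='secret', database='mydb')
replica_pool = PooledDB(creator=pymysql, host='replica.db.local', user='app',
                        password='secret', database='mydb')

def get_connection(readonly=False):
    # Send reads to the replica (slave) and writes to the master.
    pool = replica_pool if readonly else master_pool
    return pool.connection()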

Unnecessary code about _wait_lock() when shareable is true

def connection(self, shareable=True):
    if shareable and self._maxshared:
        self._lock.acquire()
        try:
            while (not self._shared_cache and self._maxconnections
                    and self._connections >= self._maxconnections):
                >>> self._wait_lock() <<<
    ....

The self._wait_lock() call will never execute, because _connections > 0 implies that _shared_cache is not empty.

can not reconnect to database when restart postgres service

Issue/Concerns:
OS: Windows 10
Python: 3.6.5
Postgres: 9.5.15
dbutils: 1.3

I set the maximum connection number for Postgres to 5 and used the pg_terminate_backend function of Postgres on Windows to kill 2 of the connections. The connection pool does not recreate 2 new connections, and when the Postgres service is restarted, all connections in the program become invalid and never reconnect.
The connection creation code is:
PooledDB(psycopg2, maxconnections=5,
         mincached=5, maxcached=5,
         maxshared=5, application_name=configs.application_name_for_database_connection,
         host=configs.database_host, port=configs.database_port, dbname=configs.database_name,
         user=configs.database_user_name, password=configs.database_password)
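
A possibly relevant knob (a suggestion, not a verified fix): the pool classes accept a failures argument, an exception class or tuple of exception classes for which the connection failover mechanism is applied. A sketch that also lists psycopg2.InterfaceError, which is what the "Connection won't reconnect if PostgreSQL server restarted" issue above reports seeing after a restart:

PooledDB(psycopg2, maxconnections=5,
         mincached=5, maxcached=5, maxshared=5,
         failures=(psycopg2.OperationalError, psycopg2.InterfaceError),
         host=configs.database_host, port=configs.database_port,
         dbname=configs.database_name, user=configs.database_user_name,
         password=configs.database_password)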

DBUtils self._con.ping

My project uses
thrift : 0.9.3
mysql-python : 1.2.5
DBUtils : 1.2

DBPools = PooledDB(creator=MySQLdb, mincached=0, maxcached=100, maxusage=5, ...)

The database has returned the content, but the server never returned. When I forcibly shut down the service, it throws:

Traceback (most recent call last):
  File "SJSrcApp.py", line 1934, in <module>
    tmpServer.serve()
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/thrift/server/TServer.py", line 252, in serve
    self.processor.process(iprot, oprot)
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/thrift/TMultiplexedProcessor.py", line 49, in process
    return self.services[serviceName].process(StoredMessageProtocol(iprot, standardMessage), oprot)
  File "/home/wxmemberapp/xhx/XHXWeight/XHXWeightSummary.py", line 2072, in process
    self._processMap[name](self, seqid, iprot, oprot)
  File "/home/wxmemberapp/xhx/XHXWeight/XHXWeightSummary.py", line 2333, in process_genPayOrder
    result.success = self._handler.genPayOrder(args.aCardNo, args.aPayType, args.aMoney)
  File "SJSrcApp.py", line 945, in genPayOrder
    tmpDatas = DBUtilsEx.executeProc('p_xhx_mobilepay_genPayOrder', (aCardNo, order_id, aPayType, aMoney))
  File "/home/wxmemberapp/xhx/DBUtilsEx.py", line 69, in executeProc
    tmpDbConn = DBPools.connection()
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/DBUtils/PooledDB.py", line 336, in connection
    con._ping_check()  # check connection
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/DBUtils/SteadyDB.py", line 337, in _ping_check
    alive = self._con.ping()
KeyboardInterrupt

Where is the problem? Is it dbutils? How can I fix it?
Thank you very much!

Question: Role of mincached?

Hello, first of all, I am not a native English speaker, so my expression may not be clear.
I want to understand the role of mincached.
In the parameter description it says:
mincached: the initial number of idle connections in the pool (the default of 0 means no connections are made at startup)

But there is this sentence in the introduction of PooledDB:
Besides the pool of shared connections, you can also set up a pool of at least mincached and at the most maxcached idle connections that will be used whenever a thread is requesting a dedicated database connection or the pool of shared connections is not yet full.

Will PooledDB keep connections open based on mincached?
Can these connections be observed using show processlist?
Thank you so much
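
To illustrate the two quoted statements (a sketch; the credentials are placeholders): with mincached=5, the pool opens five idle connections immediately at startup, and those sessions should indeed be visible with show processlist:

import pymysql
from dbutils.pooled_db import PooledDB

# mincached=5: five idle connections are opened right away at startup.
# maxcached=10: at most ten idle connections are kept in the pool.
pool = PooledDB(creator=pymysql, mincached=5, maxcached=10,
                host='localhost', user='user', password='secret', database='mydb')

# Before any query is executed, `SHOW PROCESSLIST` on the server should
# already list five sleeping sessions belonging to this process.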

PooledDbconnection concerns temptables, FlaskApp

Issue/Concerns:

  • Temp tables (tempdb.dbo.#temp_table_name) created in a read request using a PooledDB connection do not get deleted after connection.close().
    What is the concept of a session and a connection in the PooledDB context?
    Temp tables with the # prefix should ideally be deleted when the session ends.
    Can someone help me understand how sessions, connections, and transactions work in relation to PooledDB?

  • Should the PooledDB instance be stored as a global in the Flask application, or will every request create a new PooledDB instance? (See the sketch after this list.)

  • Our maxshared for PooledDB should match how many requests we aim to serve in our application at a given time. Let me know if I'm not understanding this correctly.
    What is the goal of having maxcached?
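
On the second bullet, the usual pattern (a sketch, not an official recommendation; shown with pymysql but driver-agnostic, and the credentials are placeholders) is to create the PooledDB once per process at module level, since it is thread-safe, and only fetch connections inside request handlers:

from flask import Flask
import pymysql
from dbutils.pooled_db import PooledDB

app = Flask(__name__)

# Created once per process and shared by all request handlers;
# creating a new pool per request would defeat the purpose of pooling.
pool = PooledDB(creator=pymysql, mincached=2, maxcached=10,
                host='localhost', user='user', password='secret', database='mydb')

@app.route('/ping-db')
def ping_db():
    con = pool.connection()  # borrowed from the pool for this request
    try:
        cur = con.cursor()
        cur.execute('SELECT 1')
        return str(cur.fetchone())
    finally:
        con.close()  # returned to the pool, not really closed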

PooledDB connection does not work with `with` statement

How to reproduce:

from dbutils.pooled_db import PooledDB

pool = PooledDB(some_creator)

conn = pool.connection()
with conn:
    print(1)

Will throw an AttributeError in with statement:

AttributeError                            Traceback (most recent call last)
<ipython-input-33-1657fbc224a7> in <module>
----> 1 with conn:
      2     print(1)

AttributeError: __enter__

What's wrong:

It seems conn (PooledDedicatedDBConnection object) has no attribute __enter__.

But when I type

conn.__enter__

it gives

<bound method SteadyDBConnection.__enter__ of <dbutils.steady_db.SteadyDBConnection object at 0x7fafbc4dbe10>>

So conn.__enter__ does exist, but it comes from the internal SteadyDBConnection object, not from conn itself.

After digging into the source code, I found that the __enter__ attribute is proxied from the conn._con object via __getattr__:

def __getattr__(self, name):
    """Proxy all members of the class."""
    if self._con:
        return getattr(self._con, name)
    else:
        raise InvalidConnection

So the reason for the AttributeError is that __getattr__ does not take effect in a with statement. I googled this problem and found this: "Why doesn't __getattr__ work with __exit__?". It is by design that __enter__ and __exit__ cannot be proxied with __getattr__ in a with statement, because special methods are looked up on the type rather than on the instance.

Furthermore, even if __getattr__ worked fine with __enter__ and __exit__, PooledDedicatedDBConnection shouldn't proxy them from SteadyDBConnection, because they would act the wrong way.

The right effect of PooledDedicatedDBConnection.__exit__ should be to return the current connection to the pool, like its close method does. But what SteadyDBConnection's __enter__ and __exit__ methods do is very different.

A simple way to fix

A simple way to enable the with statement on PooledDedicatedDBConnection objects is to put __enter__ and __exit__ methods into its definition, like this:

class PooledDedicatedDBConnection:
    ...

    def __enter__(self):
        return self
            
    def __exit__(self, *exc):
        self.close()

Or, we can use contextlib.closing as an alternative solution, like this:

from contextlib import closing

with closing(conn) as conn:
    print(1)

This works fine, but is not as convenient as with other DB-API 2 connections, because of the extra import.

set timeout condition on connection

Hi there, I have a project that accesses an Oracle database using DBUtils with cx_Oracle. There are two kinds of operations: one for QUERY and one for IUD (INSERT, UPDATE, DELETE).
Now I want to specify the number of milliseconds to wait for each round-trip to the database.
According to issue 277 of cx_Oracle, I can only make use of the call timeout feature after updating the client to 18.3.
After updating my client to 18.3, I changed the code to:

pool = PooledDB(cx_Oracle,
                threaded=True,
                user="user_name",
                password="password",
                dsn="xxx.xxx.xxx.xxx/xxx",
                mincached=min_cached,
                maxcached=max_cached,
                maxshared=max_shard,
                maxconnections=max_con)
conn = pool.connection()
conn.callTimeout = time_out  # time_out=0.000000000000000000000001
cursor = conn.cursor()
try:
    cursor.execute(sql, values)
except Exception as e:  # expecting a timeout error
    logger.error(str(e), traceback.format_exc())

It seems the callTimeout feature is useless here; it doesn't take effect.

I'm wondering, does DBUtils support the call timeout feature? If yes, how?
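
One possible explanation (an assumption, not verified): pool.connection() returns a wrapper object that proxies attribute reads through __getattr__, so assigning conn.callTimeout = ... may set the attribute on the wrapper rather than on the real cx_Oracle connection. A sketch that sets it on the raw connection instead; note that this reaches through DBUtils internals (_con), which may change between versions, and that cx_Oracle's callTimeout is an integer number of milliseconds:

conn = pool.connection()
# conn._con is the SteadyDBConnection and conn._con._con the raw cx_Oracle
# connection; relying on these internals is fragile, but fine for a test.
conn._con._con.callTimeout = 1000  # milliseconds, as an integer
cursor = conn.cursor()
try:
    cursor.execute(sql, values)
except Exception as e:  # expecting a timeout error
    logger.error(str(e), traceback.format_exc())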

Thread.isAlive is deprecated

The tests use the old spelling Thread.isAlive instead of Thread.is_alive (changed in Python 2.6). This should be fixed, since it causes deprecation warnings in newer Python versions.

python3.6 setup.py install error.

byte-compiling C:\Python36\Lib\site-packages\DBUtils\Examples\DBUtilsExample.py to DBUtilsExample.cpython-36.pyc
  File "C:\Python36\Lib\site-packages\DBUtils\Examples\DBUtilsExample.py", line 150
    except self.dbapi.Error, error:
                           ^
SyntaxError: invalid syntax

The comma is the trigger: except self.dbapi.Error, error: is Python 2 syntax, which is invalid under Python 3.
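
For reference, the Python 3 spelling of that except clause would be:

except self.dbapi.Error as error: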

When using the pymysql driver, the setsession parameter for PooledDB does not work after a MySQL server restart

I ran into something strange in production. I had set setsession=["set autocommit=1"] in PooledDB and assumed it would stay in autocommit mode even after the connection had been reconnected.

But it seems setsession does not work for pymysql in the reconnect situation. Here is my demo.

# -*- coding: utf-8 -*-

import pymysql
import time
import traceback
from DBUtils.PooledDB import PooledDB
from pymysql import MySQLError

pymysql.install_as_MySQLdb()
con = None
pooledDB = None

def try_insert():
    i = 0
    while i < 60:
        print "============ {0} ============".format(i)
        con = pooledDB.connection()
        try:
            cursor = con.cursor(pymysql.cursors.DictCursor)
            if not cursor:
                print "cursor is {0}".format(cursor)
            select_sql = "insert into user2(name,age) values('zhang', 20)"
            ret_rows = cursor.execute(select_sql)
            print cursor._last_executed
            print "ret_rows is {0}".format(ret_rows)

        except MySQLError as e:
            print "MySQLError error: {0}".format(e)
            print traceback.format_exc()
        except Exception as e:
            print "Exception error: {0}".format(e)
            print traceback.format_exc()
    
        i = i +1 
        time.sleep(1)
        con.close()


if __name__ == "__main__":
    db_conf = {'user':'root','passwd':'test','host':'127.0.0.1','port':3306,'connect_timeout':5,'db':'test_dbutils'}

    pooledDB = PooledDB(
        creator=pymysql,  
        maxconnections=4,  
        mincached=0,  
        maxcached=0,  
        maxshared=0,  
        blocking=True,  
        maxusage=None,  
        setsession=["set autocommit=1"],  
        ping=1,  
        reset=False,  
        **db_conf
    )
    
    try_insert()

While the demo runs, restart the MySQL server. The application reconnects to MySQL successfully and no error is returned, but the remaining SQL statements are actually never inserted into MySQL. After debugging the source code, I think the reason is that in this situation the reconnect is done by pymysql directly, not by dbutils.

I added some debug code to SteadyDBConnection to help me understand the process.

    def _ping_check(self, ping=1, reconnect=True):
        """Check whether the connection is still alive using ping().

        If the the underlying connection is not active and the ping
        parameter is set accordingly, the connection will be recreated
        unless the connection is currently inside a transaction.
        """
        if ping & self._ping:
            try: # if possible, ping the connection
                my_reconnect=True
                print "try to ping by pymysql(reconnect={0})".format(my_reconnect)
                alive = self._con.ping(reconnect=my_reconnect)
                # alive = self._con.ping()
            except (AttributeError, IndexError, TypeError, ValueError):
                print "ping method is not available"
                self._ping = 0 # ping() is not available
                alive = None
                reconnect = False
            except Exception,e :
                print "try to ping by pymysql(reconnect={0}) fail".format(my_reconnect)
                alive = False
            else:
                if alive is None:
                    alive = True
                if alive:
                    reconnect = False
                print "try to ping by pymysql(reconnect={0}) success".format(my_reconnect)
            if reconnect and not self._transaction:
                try: # try to reopen the connection
                    print "try to reconnect by dbutils"
                    con = self._create()
                except Exception:
                    print "try to reconnect by dbutils fail"
                    pass
                else:
                    print "try to reconnect by dbutils success"
                    self._close()
                    self._store(con)
                    alive = True
            return alive

For pymysql, the ping method reconnects by default, so the my_reconnect variable I added for debugging will always be true. It seems that dbutils assumed ping would not reconnect, so the setsession parameter is not applied in the pymysql reconnect situation. Should setsession be used this way? Would it be possible to add a compatibility parameter for drivers whose ping method has a reconnect parameter?

Actually, I found another way to solve my problem, by adding autocommit to db_conf, like:

    db_conf = {'user':'root','passwd':'test','host':'127.0.0.1','port':3306,'connect_timeout':5,'db':'test_dbutils',"autocommit":True}

My local Python and package versions:
dbutils: 1.1
pymysql: 0.9.3
Python: 2.7.13

TCP connection disconnected problem

Hi,
First of all, thanks to your team for providing this useful database tool.
I have a problem in a specific situation:

  • I run my MySQL management code in a Docker container on a Docker platform.
  • This platform has a feature: when the idle time of a TCP connection in a running container exceeds 240 seconds, the connection is cut.
  • This makes me unable to use PooledDB's connection pool to access my MySQL database: when I take a connection from the pool and execute a SQL statement, I get a timeout error.

I wonder if there is any PooledDB interface that can solve this problem. (See the sketch below.)

Environment:
Python 3.9 + DBUtils 2.0.2 + pymysql 0.10.0

Thanks.
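
In case it helps: the pool classes accept a ping argument that tells the underlying SteadyDB layer when to verify connections with ping() and transparently re-create them if they have been cut. A sketch with placeholder credentials:

import pymysql
from dbutils.pooled_db import PooledDB

# ping=1 (the default): each connection is checked with ping() whenever it
# is taken from the pool and silently re-created if the platform has cut it.
# ping=4 would check before every query, ping=7 always.
pool = PooledDB(creator=pymysql, ping=1,
                host='localhost', user='user', password='secret', database='mydb')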

SteadyDBCursor is not iterable

SteadyDBCursor cannot be iterated over, while pymysql's cursor does support iteration.

Is there any good reason why iteration is not implemented, or was this simply forgotten?

pymysql seems to implement iteration like this. Could something like this be added?

    def __iter__(self):
        return iter(self.fetchone, None)
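
As a workaround until something like that is added, the same idiom works from the outside (a sketch, assuming a DB-API cursor whose fetchone() returns None once the result set is exhausted):

cursor.execute('SELECT id, name FROM user')
# iter(callable, sentinel) keeps calling fetchone() until it returns None.
for row in iter(cursor.fetchone, None):
    print(row)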

Transactions getting rolled back, even after committing in PooledDb connection

Creating two connection pool engines:

  1. One where all the transactions on the connection will be autocommitted.
  2. One where all the transactions on the various connections will be manually committed.

The creator used here is pyodbc.
Even without sending "autocommit", the transactions that are manually committed while writing to the DB are getting rolled back when the application closes.
The code pattern below was chosen taking into consideration the pyodbc transaction management notes here: https://github.com/mkleehammer/pyodbc/wiki/Database-Transaction-Management
But it seems the DBUtils PooledDB connection isn't sending the right information to pyodbc.
if autocommit_flag:
    args: dict = {
        "autocommit": True,
        "Driver": "{ODBC Driver 11 for SQL Server}",
        "Server": xxxxx,
        "Database": xxxxxxx,
        "Uid": xxxxxxxx,
        "Pwd": xxxxxxx,
    }
    self._connection_pools_for_transactions_autocommitted[dbcode] = PooledDB(
        creator=pyodbc,
        mincached=1,
        maxcached=5,
        maxshared=5,
        maxusage=0,
        **args,
    )
else:
    args: dict = {
        "Driver": "{ODBC Driver 11 for SQL Server}",
        "Server": xxxxx,
        "Database": xxxxxxx,
        "Uid": xxxxxxxx,
        "Pwd": xxxxxxx,
    }
    self._connection_pools_for_transactions_explicitly_committed[dbcode] = PooledDB(
        creator=pyodbc,
        mincached=1,
        maxcached=5,
        maxshared=5,
        maxusage=0,
        **args,
    )
def write(
        self,
        dbcode: str,
        sql: str,
        commit: bool = False,
        dedicated: bool = False,
        quiet: bool = False,
    ) -> bool:
        """Wraps the connect and execute steps for DB querys. Executes only one statement at a time

            Args:
                dbcode (str)               : The Database to run this query on
                sql (str)                  : The SQL query we want to execute
                commit (bool, optional)    : If we want to explicitly commit after execution
                dedicated (bool, optional) : If we want a seperate non-shared connection for executing the query
                quiet (bool, optional)     : If we should suppress exceptions
        """

        results: bool = False

        with self.get_cursor(dbcode, dedicated) as (cursor, connection):
            try:

                # begin transaction
                if commit:
                    cursor.execute("BEGIN TRANSACTION")

                # execute sql
                cursor.execute(sql)

                # determine if rows were affected
                if cursor.rowcount > 0:
                    results = True

            except pyodbc.Error as e:
                # rollback execution in response to error
                connection.rollback()
                if not quiet:
                    raise

            finally:

                # end transaction
                if commit:
                    connection.commit()

        return results
def _get_connection(self, dbcode: str, autocommit_flag: bool = True, dedicated_connection: bool = False):
        """Gets a connection from the pool of connections created for the specific database server

            Args:
                dbcode (str)                          : The database server whose connection is requested
                autocommit_flag (bool) : This flag determines if pyodbc connection handles autocommit of sql statements
                dedicated_connection (bool, optional) : Indicates if dedicated (non shared) connection for autocommited transactions
                                             is required for executing query

            Returns:
                object : A Pooled Shared DB Connection object

        """
        if autocommit_flag:
            if dbcode not in self._connection_pools_for_transactions_autocommitted:
                connection_creator = pyodbc
                connection_creator.pooling = False
                self._create_connection_pool(dbcode, connection_creator, autocommit_flag)

            pool = self._connection_pools_for_transactions_autocommitted[dbcode]
            return pool.dedicated_connection() if dedicated_connection else pool.connection()
        else:
            if dbcode not in self._connection_pools_for_transactions_explicitly_committed:
                connection_creator = pyodbc
                connection_creator.pooling = False
                self._create_connection_pool(dbcode, connection_creator, autocommit_flag)

            pool = self._connection_pools_for_transactions_explicitly_committed[dbcode]
            return pool.dedicated_connection()

AttributeError using cause by missing __exit__ function

Using DBUtils.PooledDB on python3.5, tested with PyGreSQL, psycopg2cffi, and mysqlclient.

If I try to define a cursor using the with statement, the library raises an exception. For example:

pool = PooledDB(pgdb, 100, [connection params])

with pool.cursor() as cur:
      cur.execute("select 1")

this code raises an AttributeError: __exit__ exception. This happens with any database connector, so it is not related to the underlying implementation of the external libraries. The solution is to rewrite the code as

pool = PooledDB(pgdb, 100, [connection params])

cur = pool.cursor()
cur.execute("select 1")
cur.close()

but this makes refactoring pre-existing code less straightforward.

Using use_pure = False causes error during mysql connection creation

In the current version, the use_pure argument causes an error:

File "/Users/tdv78/Library/Mobile Documents/comappleCloudDocs/Programming/progv3/progv3/venv/lib/python3.9/site-packages/mysql/connector/connection_cext.py", line 246, in _open_connection
self._cmysql.connect(**cnx_kwargs)
TypeError: function takes at most 20 keyword arguments (25 given)

use_pure=True fixes the problem, but it is much slower than use_pure=False. The previous version was fine with use_pure=False.

Customized connection parametrization

Hi,

In some cases it is required to set additional parameters on a connection after it has been created,
e.g. for pyodbc, things like: con.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
I don't see any hints in the documentation on how to do this.

regards,
Anton.
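
One way to do this (a sketch; the connection string is a placeholder): the creator argument of the pool classes may be an arbitrary function returning new DB-API 2 connections rather than a database module, so the extra parametrization can be wrapped in a custom creator:

import pyodbc
from dbutils.pooled_db import PooledDB

def creator():
    # Placeholder connection string; adjust for your environment.
    con = pyodbc.connect('DSN=mydsn;UID=user;PWD=secret')
    # Post-configure every new connection before the pool ever hands it out.
    con.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
    return con

pool = PooledDB(creator, mincached=2)
con = pool.connection()  # already has the custom decoding applied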

Use pyproject.toml instead of setup.py

As suggested by @cclauss: setup.py is now deprecated so we should run setuptools-py2cfg and then ini2toml to migrate everything to pyproject.toml.

We can then use hatch or pdm as build tool instead of setuptools.

Lots of errors when using PooledDB together with uWSGI, Flask and pymysql

Hi there!
Since I've been using the library, I've noticed a strange error, which unfortunately doesn't occur reproducibly. I'm sharing my observations here; maybe you have an idea.

relevant part of the stack trace

  File "/opt/maddox/./modules/core/user_handler/user_handler.py", line 131, in get_user_by_id
    return self._make_user_from_data_row(data_row=cursor.fetchone())
  File "/opt/maddox/./modules/core/user_handler/user_handler.py", line 509, in _make_user_from_data_row
    user_id = data_row['user_id']
KeyError: 'user_id'

relevant code section

    def get_user_by_id(self, user_id: str, cursor: DictCursor) -> User:
        result_count = cursor.execute(
            query=Sql.GET_USER_BY_ID,
            args=(user_id,),
        )
        if result_count != 1:
            raise MaddoxException(msg=f'User with id "{user_id}" not found.')

        return self._make_user_from_data_row(data_row=cursor.fetchone())

    def _make_user_from_data_row(self, data_row: Dict[str, Any]) -> User:
        user_id = data_row['user_id']  # KeyError is raised here
        role = Role(data_row['role_id'])
        # lots of other stuff

Since _make_user_from_data_row was called as shown in the stack trace above, it follows that result_count must have been equal to 1. But this contradicts the KeyError, which suggests that there were no rows to fetch.
Do you have an idea what could go wrong here?

Still got OperationalError when using PersistentDB

import os

import MySQLdb as mysql  # assumption: the original import was not shown
from dbutils.persistent_db import PersistentDB

conn = PersistentDB(
    mysql,
    ping=0,  # ping may cause segmentation fault
    host=os.getenv("VS_DB_HOST"),
    user=os.getenv("VS_DB_USER"),
    passwd=os.getenv("VS_DB_PASSWORD"),
    db=os.getenv("VS_DB_NAME"),
    charset="utf8mb4",
)
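
A note on the config above (an observation, not a confirmed diagnosis): with ping=0, SteadyDB never checks connections with ping(), so a connection dropped by the server only surfaces as an OperationalError on the next use. If the segmentation faults came from the C driver's ping(), one option (a sketch, swapping in the pure-Python pymysql driver) is to keep the ping check enabled:

import os

import pymysql  # pure-Python driver, so ping() cannot segfault in C code
from dbutils.persistent_db import PersistentDB

conn = PersistentDB(
    pymysql,
    ping=1,  # check the connection with ping() whenever it is requested
    host=os.getenv("VS_DB_HOST"),
    user=os.getenv("VS_DB_USER"),
    password=os.getenv("VS_DB_PASSWORD"),
    database=os.getenv("VS_DB_NAME"),
    charset="utf8mb4",
)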
