webwareforpython / dbutils
Database connections for multi-threaded environments
License: MIT License
First of all, thanks to the author for this contribution; this is a good connection tool. I have a question about how to connect to multiple databases. I'm using a master-slave database setup now and creating two connection pool engines:
# autocommitted
if autocommit_flag:
    args: dict = {
        "autocommit": True,
        "Driver": "{ODBC Driver 11 for SQL Server}",
        "Server": xxxxx,
        "Database": xxxxxxx,
        "Uid": xxxxxxxx,
        "Pwd": xxxxxxx,
    }
    self._connection_pools_for_transactions_autocommitted[dbcode] = PooledDB(
        creator=pyodbc,
        mincached=1,
        maxcached=5,
        maxshared=5,
        maxusage=0,
        **args,
    )
else:
    args: dict = {
        "Driver": "{ODBC Driver 11 for SQL Server}",
        "Server": xxxxx,
        "Database": xxxxxxx,
        "Uid": xxxxxxxx,
        "Pwd": xxxxxxx,
    }
    self._connection_pools_for_transactions_explicitly_committed[dbcode] = PooledDB(
        creator=pyodbc,
        mincached=1,
        maxcached=5,
        maxshared=5,
        maxusage=0,
        **args,
    )
def write(
    self,
    dbcode: str,
    sql: str,
    commit: bool = False,
    dedicated: bool = False,
    quiet: bool = False,
) -> bool:
    """Wraps the connect and execute steps for DB queries. Executes only one statement at a time.
    Args:
        dbcode (str): The database to run this query on
        sql (str): The SQL query we want to execute
        commit (bool, optional): If we want to explicitly commit after execution
        dedicated (bool, optional): If we want a separate non-shared connection for executing the query
        quiet (bool, optional): If we should suppress exceptions
    """
    results: bool = False
    with self.get_cursor(dbcode, dedicated) as (cursor, connection):
        try:
            # begin transaction
            if commit:
                cursor.execute("BEGIN TRANSACTION")
            # execute sql
            cursor.execute(sql)
            # determine if rows were affected
            if cursor.rowcount > 0:
                results = True
        except pyodbc.Error:
            # roll back execution in response to error
            connection.rollback()
            if not quiet:
                raise
        finally:
            # end transaction
            if commit:
                connection.commit()
    return results
def _get_connection(self, dbcode: str, autocommit_flag: bool = True, dedicated_connection: bool = False):
    """Gets a connection from the pool of connections created for the specific database server.
    Args:
        dbcode (str): The database server whose connection is requested
        autocommit_flag (bool): Determines if the pyodbc connection handles autocommit of SQL statements
        dedicated_connection (bool, optional): Indicates if a dedicated (non-shared) connection for
            autocommitted transactions is required for executing the query
    Returns:
        object: A pooled shared DB connection object
    """
    if autocommit_flag:
        if dbcode not in self._connection_pools_for_transactions_autocommitted:
            connection_creator = pyodbc
            connection_creator.pooling = False
            self._create_connection_pool(dbcode, connection_creator, autocommit_flag)
        pool = self._connection_pools_for_transactions_autocommitted[dbcode]
        return pool.dedicated_connection() if dedicated_connection else pool.connection()
    else:
        if dbcode not in self._connection_pools_for_transactions_explicitly_committed:
            connection_creator = pyodbc
            connection_creator.pooling = False
            self._create_connection_pool(dbcode, connection_creator, autocommit_flag)
        pool = self._connection_pools_for_transactions_explicitly_committed[dbcode]
        return pool.dedicated_connection()
Hi there, I have a project which accesses an Oracle database using DBUtils with cx_Oracle. There are two kinds of operations: one is QUERY and the other is IUD (INSERT, UPDATE, DELETE).
Now I want to specify the number of milliseconds I want to wait for each round-trip to the database.
As in issue 277 of cx_Oracle, I can make use of the call timeout feature once I update the client to 18.3.
After I updated my client to 18.3 and changed the code to:
pool = PooledDB(cx_Oracle,
                threaded=True,
                user="user_name",
                password="password",
                dsn="xxx.xxx.xxx.xxx/xxx",
                mincached=min_cached,
                maxcached=max_cached,
                maxshared=max_shard,
                maxconnections=max_con)
conn = pool.connection()
conn.callTimeout = time_out  # time_out=0.000000000000000000000001
cursor = conn.cursor()
try:
    cursor.execute(sql, values)
except Exception as e:  # expecting timeout error
    logger.error(str(e), traceback.format_exc())
it seems the callTimeout feature doesn't take effect.
I'm wondering whether DBUtils supports the call timeout feature? If yes, how?
As suggested by @cclauss: setup.py is now deprecated so we should run setuptools-py2cfg and then ini2toml to migrate everything to pyproject.toml.
We can then use hatch or pdm as build tool instead of setuptools.
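As a rough sketch of what the migration target could look like, the setup.py metadata would end up in a pyproject.toml along these lines (the field values and the hatchling backend here are illustrative, not copied from the real setup.py):

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "DBUtils"
description = "Database connections for multi-threaded environments."
license = {text = "MIT License"}
dynamic = ["version"]
```

The tools mentioned above (setuptools-py2cfg, then ini2toml) automate most of this conversion, leaving only a review pass by hand.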
I declared a Flask service as shown below to execute insert or update SQL against MySQL 5.7. However, after about 10 days, the service can't insert or update any data, and no exception is thrown.
@app.route('/data-sync/increment', methods=['POST'])
def increment_data_sync():
    if not request.json or 'dest_db' not in request.json or 'sql' not in request.json:
        abort(400)
    else:
        sql = request.json['sql']
        dest_conn = dest_db_pool.connection()
        try:
            with dest_conn.cursor() as cursor:
                print(sql)
                cursor.execute(sql)
                cursor.close()
        except Exception:
            traceback.print_exc()
            return jsonify({'code': -1, 'msg': 'failed', 'data': None})
        finally:
            dest_conn.close()
        return jsonify({'code': 0, 'msg': 'success', 'data': None})

if __name__ == "__main__":
    ConfigParser.load_config()
    dest_conn_setting = {'host': ConfigParser.get_dest_db_host(), 'port': ConfigParser.get_dest_db_port(),
                         'user': ConfigParser.get_dest_db_usr(), 'passwd': ConfigParser.get_dest_db_passwd()}
    dest_db_pool = PooledDB(creator=pymysql, mincached=2, maxcached=20, maxshared=20, maxconnections=20, ping=7,
                            blocking=False, maxusage=0, setsession=['SET AUTOCOMMIT = 1'], **dest_conn_setting)
    app.run(host="0.0.0.0", port=8001, debug=True)
Any suggestion? Thank you.
Hi,
I ran into a problem: when I use this package to connect to MSSQL (pymssql), when the server fails over, my program gets killed with only the message 'Segmentation fault'.
I used strace to track the error and got messages like:
close(5) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SI_KERNEL, si_addr=0} ---
+++ killed by SIGSEGV +++
A null pointer?
How can I get the primary key after executing an INSERT statement?
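Most DB-API 2 drivers expose the key generated by the last INSERT as cursor.lastrowid (pymysql and the stdlib sqlite3 module both do; with MySQL you can also run SELECT LAST_INSERT_ID()). A DBUtils pooled connection proxies cursor attributes, so the same attribute should be reachable there too. A minimal self-contained sketch with sqlite3:

```python
import sqlite3

# in-memory database for demonstration
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE user (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.execute("INSERT INTO user (name) VALUES (?)", ("alice",))
conn.commit()

# lastrowid holds the primary key generated by the last INSERT
print(cur.lastrowid)  # → 1
```

Note that lastrowid is only meaningful immediately after the INSERT on the same cursor, before any other statement runs on it.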
Hello, how can I get the connection object's database name?
Issue/Concerns:
OS: Win10
Python: 3.6.5
Postgres: 9.5.15
DBUtils: 1.3
I set max_connections in Postgres to 5 and used the pg_terminate_backend function of Postgres on Windows to kill 2 of the connections. The connection pool will not recreate 2 new connections for the pool, and when the Postgres service is restarted, all connections in the program become invalid and will not reconnect.
The connection creation code is:
PooledDB(psycopg2, maxconnections=5,
         mincached=5, maxcached=5,
         maxshared=5, application_name=configs.application_name_for_database_connection,
         host=configs.database_host, port=configs.database_port, dbname=configs.database_name,
         user=configs.database_user_name, password=configs.database_password)
Would it be possible to start building wheels for DBUtils for hosting on pypi?
Hi there!
Since I've been using the library, I've noticed a strange error which unfortunately doesn't occur reproducibly. I'll share my observations here; maybe you have an idea.
File "/opt/maddox/./modules/core/user_handler/user_handler.py", line 131, in get_user_by_id
    return self._make_user_from_data_row(data_row=cursor.fetchone())
File "/opt/maddox/./modules/core/user_handler/user_handler.py", line 509, in _make_user_from_data_row
    user_id = data_row['user_id']
KeyError: 'user_id'

def get_user_by_id(self, user_id: str, cursor: DictCursor) -> User:
    result_count = cursor.execute(
        query=Sql.GET_USER_BY_ID,
        args=(user_id,),
    )
    if result_count != 1:
        raise MaddoxException(msg=f'User with id "{user_id}" not found.')
    return self._make_user_from_data_row(data_row=cursor.fetchone())

def _make_user_from_data_row(self, data_row: Dict[str, Any]) -> User:
    user_id = data_row['user_id']  # KeyError is raised here
    role = Role(data_row['role_id'])
    # lots of other stuff
Since _make_user_from_data_row was called as in the stack trace above, it follows that result_count must have been equal to 1. But this contradicts the KeyError, which means there were no rows to fetch.
Do you have an idea what could go wrong here?
conn = PersistentDB(
    mysql,
    ping=0,  # ping may cause segmentation fault
    host=os.getenv("VS_DB_HOST"),
    user=os.getenv("VS_DB_USER"),
    passwd=os.getenv("VS_DB_PASSWORD"),
    db=os.getenv("VS_DB_NAME"),
    charset="utf8mb4",
)
byte-compiling C:\Python36\Lib\site-packages\DBUtils\Examples\DBUtilsExample.py to DBUtilsExample.cpython-36.pyc
File "C:\Python36\Lib\site-packages\DBUtils\Examples\DBUtilsExample.py", line 150
except self.dbapi.Error, error:
^
SyntaxError: invalid syntax
The comma triggers the error (this is old Python 2 except syntax, which is invalid in Python 3).
1.x
from DBUtils.PooledDB import PooledDB
2.x
from dbutils.pooled_db import PooledDB
I hope the 1.x module names can remain importable in the 2.x version for compatibility, with a warning message shown when the compatibility import is used.
Thx : )
from dbutils.pooled_db import PooledDB

pool = PooledDB(some_creator)
conn = pool.connection()
with conn:
    print(1)
This will throw an AttributeError in the with statement:
AttributeError Traceback (most recent call last)
<ipython-input-33-1657fbc224a7> in <module>
----> 1 with conn:
2 print(1)
AttributeError: __enter__
It seems conn (a PooledDedicatedDBConnection object) has no attribute __enter__.
But when I type conn.__enter__ it gives
<bound method SteadyDBConnection.__enter__ of <dbutils.steady_db.SteadyDBConnection object at 0x7fafbc4dbe10>>
So conn.__enter__ does exist, but it comes from the internal SteadyDBConnection object, not from conn itself.
After digging into the source code, I found the __enter__ attribute is proxied from the conn._con object via __getattr__:
def __getattr__(self, name):
    """Proxy all members of the class."""
    if self._con:
        return getattr(self._con, name)
    else:
        raise InvalidConnection
So the reason for the AttributeError is that __getattr__ doesn't take effect in a with statement. I Googled this problem and found this: "Why doesn't getattr work with exit?". It seems to be a deliberate design that we can't proxy __enter__ and __exit__ with __getattr__ in a with statement.
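This behaviour can be reproduced without DBUtils at all: the with statement looks up __enter__ and __exit__ on the type of the object, so an instance-level __getattr__ proxy is never consulted. A minimal self-contained demonstration (Proxy and Managed are made-up names for illustration):

```python
class Proxy:
    """Proxies attribute access to a wrapped object via __getattr__."""

    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # only called when normal lookup fails
        return getattr(self._target, name)


class Managed:
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False


proxy = Proxy(Managed())

# Explicit attribute access goes through __getattr__ and succeeds:
print(proxy.__enter__)  # bound method of the wrapped Managed instance

# But the with statement looks up __enter__/__exit__ on type(proxy),
# bypassing __getattr__, so it fails (AttributeError before Python 3.11,
# TypeError on newer versions):
try:
    with proxy:
        pass
except (AttributeError, TypeError) as e:
    print("with failed:", e)
```

Explicit attribute access still succeeds because it falls back to __getattr__, which is exactly the asymmetry the reporter observed.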
Furthermore, even if __getattr__ did work with __enter__ and __exit__, PooledDedicatedDBConnection shouldn't proxy them from SteadyDBConnection, because they would act the wrong way. The right effect of PooledDedicatedDBConnection.__exit__ should be to return the current connection to the pool, like its close method does. But what SteadyDBConnection's __enter__ and __exit__ methods do is very different.
A simple way to enable the with statement on a PooledDedicatedDBConnection object is putting __enter__ and __exit__ methods into its definition, like this:
class PooledDedicatedDBConnection:
    ...
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()
Or, we can use contextlib.closing as an alternative solution, like this:
from contextlib import closing

with closing(conn) as conn:
    print(1)
This works fine, but is not as convenient as other DB API 2 connections because of the extra import.
The PROCEDURE code is:
CREATE DEFINER=`root`@`%` PROCEDURE `P_TEST`(OUT aCode varchar(4), OUT aMsg varchar(16), IN aAppName varchar(16))
    COMMENT 'test'
BEGIN
    set aCode = '1';
    set aMsg = 'err_msg';
    select aAppName;
END
and the python code is:
def executeProc(aProcName, aParams):
    tmpDbConn = None
    tmpCursor = None
    try:
        tmpListData = list(aParams)
        tmpListData.insert(0, '')
        tmpListData.insert(0, '')
        aParams = tuple(tmpListData)
        print(aProcName, aParams)
        tmpDbConn = DBPools.connection()
        tmpCursor = DBPools.connection().cursor()
        tmpCursor.callproc(aProcName, aParams)
        tmpDatas1 = tmpCursor.fetchall()
        print(tmpDatas1)
        tmpCursor.execute('select @_%s_0, @_%s_1 ;' % (aProcName, aProcName))
        tmpDatas2 = tmpCursor.fetchall()
        print(tmpDatas2)
        code = tmpDatas2[0][0]
        msg = tmpDatas2[0][1]
        tmpCursor.close()
        tmpDbConn.close()
        return (code, msg, tmpDatas1)
    except InternalError as e:
        print(e)
        return (sqlServerInternalError, all_code[sqlServerInternalError])
    except ProgrammingError as e:
        print(e)
        return (sqlServerProgrammingError, all_code[sqlServerProgrammingError])
    except InterfaceError as e:
        print(e)
        return (sqlServerConnectFail, all_code[sqlServerConnectFail])
    except OperationalError as e:
        print(e)
        return (sqlServerInterfaceError, all_code[sqlServerInterfaceError])
    except Exception as e:
        print(e)
        return (sqlServerException, all_code[sqlServerException])
    finally:
        if tmpCursor:
            tmpCursor.close()
        if tmpDbConn:
            tmpDbConn.close()

if __name__ == "__main__":
    for i in range(100):
        executeProc('P_TEST', ('a'))
and test result is:
P_TEST ('', '', 'a')
(('a',),)
(('1', 'err_msg'),)
P_TEST ('', '', 'a')
(('a',),)
look here : ((None, None),)
P_TEST ('', '', 'a')
(('a',),)
look here : ((None, None),)
P_TEST ('', '', 'a')
(('a',),)
(('1', 'err_msg'),)
P_TEST ('', '', 'a')
(('a',),)
(('1', 'err_msg'),)
This happens just randomly. Where is the problem? Thanks!
Hi,
In some cases it is required to set additional parameters on a connection after it has been created.
E.g. for pyodbc, things like: con.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
I don't see any hints in the documentation on how to do this.
Regards,
Anton.
Hi,
First of all, thanks to your team for providing this useful database tool.
I have a problem in a specific situation:
I wonder if there is any PooledDB interface that can solve this problem.
Environment: Python 3.9 + DBUtils 2.0.2 + pymysql 0.10.0
Thanks.
def __getattr__(self, name):
    """Proxy all members of the class."""
    if self._con:
        return getattr(self._con, name)
    else:
        raise InvalidConnection

name = 'commit'
Add official support and testing for Python 3.8, desupport Python 2.6, and maybe desupport Python 3.4 as well (it does not even get security fixes any more).
I encountered something strange in production. I had set setsession=["set autocommit=1"] in PooledDB and assumed it would still autocommit even after the connection has been reconnected.
But it seems the setsession commands are not applied by pymysql in the reconnect situation. Here is my demo.
# -*- coding: utf-8 -*-
import pymysql
import time
import traceback
from DBUtils.PooledDB import PooledDB
from pymysql import MySQLError

pymysql.install_as_MySQLdb()

con = None
pooledDB = None

def try_insert():
    i = 0
    while i < 60:
        print "============ {0} ============".format(i)
        con = pooledDB.connection()
        try:
            cursor = con.cursor(pymysql.cursors.DictCursor)
            if not cursor:
                print "cursor is {0}".format(cursor)
            select_sql = "insert into user2(name,age) values('zhang', 20)"
            ret_rows = cursor.execute(select_sql)
            print cursor._last_executed
            print "ret_rows is {0}".format(ret_rows)
        except MySQLError as e:
            print "MySQLError error: {0}".format(e)
            print traceback.format_exc()
        except Exception as e:
            print "Exception error: {0}".format(e)
            print traceback.format_exc()
        i = i + 1
        time.sleep(1)
        con.close()

if __name__ == "__main__":
    db_conf = {'user': 'root', 'passwd': 'test', 'host': '127.0.0.1', 'port': 3306, 'connect_timeout': 5, 'db': 'test_dbutils'}
    pooledDB = PooledDB(
        creator=pymysql,
        maxconnections=4,
        mincached=0,
        maxcached=0,
        maxshared=0,
        blocking=True,
        maxusage=None,
        setsession=["set autocommit=1"],
        ping=1,
        reset=False,
        **db_conf
    )
    try_insert()
When the demo runs, try restarting the MySQL server. After the application reconnects to MySQL successfully, it gets no errors back, but it actually will not insert into MySQL for the remaining SQL. After debugging the source code, I think the reason is that in this situation it reconnects via pymysql directly, not via DBUtils.
I added some debug code in SteadyDBConnection to help me understand the process.
def _ping_check(self, ping=1, reconnect=True):
    """Check whether the connection is still alive using ping().

    If the underlying connection is not active and the ping
    parameter is set accordingly, the connection will be recreated
    unless the connection is currently inside a transaction.
    """
    if ping & self._ping:
        try:  # if possible, ping the connection
            my_reconnect = True
            print "try to ping by pymysql(reconnect={0})".format(my_reconnect)
            alive = self._con.ping(reconnect=my_reconnect)
            # alive = self._con.ping()
        except (AttributeError, IndexError, TypeError, ValueError):
            print "ping method is not available"
            self._ping = 0  # ping() is not available
            alive = None
            reconnect = False
        except Exception, e:
            print "try to ping by pymysql(reconnect={0}) fail".format(my_reconnect)
            alive = False
        else:
            if alive is None:
                alive = True
            if alive:
                reconnect = False
                print "try to ping by pymysql(reconnect={0}) success".format(my_reconnect)
        if reconnect and not self._transaction:
            try:  # try to reopen the connection
                print "try to reconnect by dbutils"
                con = self._create()
            except Exception:
                print "try to reconnect by dbutils fail"
                pass
            else:
                print "try to reconnect by dbutils success"
                self._close()
                self._store(con)
                alive = True
        return alive
For pymysql, its ping method has default reconnect behavior, and the my_reconnect variable added for debugging will always be true. It seems that DBUtils assumed ping would not reconnect. So the setsession parameter will not be applied in the pymysql reconnect situation. Should setsession be used in this way? Is it possible to add a compatibility parameter for drivers which have a reconnect parameter in their ping method?
Actually, I found another way to solve my problem, by adding autocommit in db_conf, like
db_conf = {'user':'root','passwd':'test','host':'127.0.0.1','port':3306,'connect_timeout':5,'db':'test_dbutils',"autocommit":True}
My local python and package version:
dbutils : 1.1
pymysql : 0.9.3
Python : 2.7.13
My project uses:
thrift: 0.9.3
mysql-python: 1.2.5
DBUtils: 1.2
DBPools = PooledDB(creator=MySQLdb, mincached=0, maxcached=100, maxusage=5, ...)
The database returned content, but the server never returned. When I forcibly shut down the service, it throws...
Traceback (most recent call last):
  File "SJSrcApp.py", line 1934, in <module>
    tmpServer.serve()
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/thrift/server/TServer.py", line 252, in serve
    self.processor.process(iprot, oprot)
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/thrift/TMultiplexedProcessor.py", line 49, in process
    return self.services[serviceName].process(StoredMessageProtocol(iprot, standardMessage), oprot)
  File "/home/wxmemberapp/xhx/XHXWeight/XHXWeightSummary.py", line 2072, in process
    self._processMap[name](self, seqid, iprot, oprot)
  File "/home/wxmemberapp/xhx/XHXWeight/XHXWeightSummary.py", line 2333, in process_genPayOrder
    result.success = self._handler.genPayOrder(args.aCardNo, args.aPayType, args.aMoney)
  File "SJSrcApp.py", line 945, in genPayOrder
    tmpDatas = DBUtilsEx.executeProc('p_xhx_mobilepay_genPayOrder', (aCardNo, order_id, aPayType, aMoney))
  File "/home/wxmemberapp/xhx/DBUtilsEx.py", line 69, in executeProc
    tmpDbConn = DBPools.connection()
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/DBUtils/PooledDB.py", line 336, in connection
    con._ping_check()  # check connection
  File "/home/wxmemberapp/env/local/lib/python2.7/site-packages/DBUtils/SteadyDB.py", line 337, in _ping_check
    alive = self._con.ping()
KeyboardInterrupt
Where is the problem? Is it DBUtils? How can I fix it?
Thank you very much!
% ruff --select=FBT --statistics .
93 FBT003 Boolean positional value in function call
12 FBT002 Boolean default value in function definition
My sense is that it would break too much to fix them now, but it is an interesting insight.
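For context, the usual fix for these FBT ("boolean trap") findings is to make boolean parameters keyword-only, which is exactly what would break existing positional callers. A small illustrative sketch (the function names here are made up, not from the DBUtils API):

```python
# Before: triggers FBT002 (boolean default) and invites FBT003 at call sites.
def fetch(sql, dedicated=False):
    return (sql, dedicated)

# After: the bare * makes the flag keyword-only, so callers must name it.
def fetch_fixed(sql, *, dedicated=False):
    return (sql, dedicated)

print(fetch("select 1", True))                  # → ('select 1', True), unclear at the call site
print(fetch_fixed("select 1", dedicated=True))  # → ('select 1', True), explicit at the call site
```

Any caller still passing the flag positionally to the fixed version gets a TypeError, which is why applying this across a released API is a breaking change.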
import random
import threading
import time
import traceback

import psycopg2
from dbutils.pooled_db import PooledDB

pool = PooledDB(
    psycopg2,
    # failures=(psycopg2.InterfaceError, psycopg2.OperationalError, psycopg2.InternalError),
    host='localhost',
)

def w():
    while True:
        try:
            with pool.connection() as db:
                with db.cursor() as cursor:
                    cursor.execute("select 1")
                    print(cursor.fetchall())
                db.commit()
        except Exception:
            traceback.print_exc()
        time.sleep(random.random())

for _ in range(10):
    t = threading.Thread(target=w)
    t.start()
Run this code. It prints lots of [(1,)]. Then restart PostgreSQL.
I get many psycopg2.InterfaceError: connection already closed errors and it won't recover.
Maybe InterfaceError should be in the default value of failures.
Thanks!
The tests use the old spelling Thread.isAlive instead of Thread.is_alive (changed in Python 2.6). This should be fixed since it causes deprecation warnings in newer Python versions.
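The rename is easy to verify: is_alive is the supported spelling, and the old camelCase alias was removed entirely in Python 3.9, so the deprecated name eventually becomes an outright AttributeError:

```python
import threading

t = threading.Thread(target=lambda: None)
t.start()
t.join()

# The snake_case spelling is the supported API:
print(t.is_alive())  # → False (the thread has already been joined)

# On Python 3.9+ the old camelCase alias is gone entirely:
print(hasattr(t, "isAlive"))
```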
Issue/Concerns:
Temp tables tempdb.dbo.#temp_table_name created in a read request using a pooled DB connection do not get deleted after connection.close().
What's the concept of session & connection in the PooledDB context?
Temp tables with # as a prefix should ideally get deleted when the session ends.
Can someone help me understand how sessions, connections & transactions work in connection with PooledDB?
Should the PooledDB instance be stored as a global in the Flask application? Or will every request create a new PooledDB instance?
Our maxshared for PooledDB should match how many requests we aim to serve in our application at a given time. Let me know if I'm not understanding this correctly.
What's the goal of having maxcached?
SteadyDBCursor cannot be iterated over, while pymysql's cursor does support iteration.
Is there any good reason why iteration is not implemented, or was this simply forgotten?
pymysql seems to implement iteration like this. Could something like this be added?
def __iter__(self):
    return iter(self.fetchone, None)
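The two-argument form of iter() calls fetchone until it returns the sentinel None, which is exactly how row iteration can be layered onto any DB-API cursor. A self-contained sketch with the stdlib sqlite3 driver (whose cursors are already iterable, but the pattern is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t (n) VALUES (?)", [(1,), (2,), (3,)])
cur.execute("SELECT n FROM t ORDER BY n")

# iter(callable, sentinel): keep calling fetchone() until it returns None
rows = list(iter(cur.fetchone, None))
print(rows)  # → [(1,), (2,), (3,)]
```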
Hello,
I've been trying to enable the CLIENT_MULTI_STATEMENTS flag with PooledDB using the PyMySQL driver. I've tried all sorts of syntax on connection with the "setsession" parameter with no luck, and at this point I'm wondering if it's even possible. Otherwise, I love this module, thank you for the work you do.
Hi! I just looked for a connection pool manager and came across two similar packages:
this one and "Databases" (https://github.com/encode/databases/).
I am wondering if there is any key difference, apart from the async feature?
In the current version the use_pure argument causes an error:
File "/Users/tdv78/Library/Mobile Documents/comappleCloudDocs/Programming/progv3/progv3/venv/lib/python3.9/site-packages/mysql/connector/connection_cext.py", line 246, in _open_connection
    self._cmysql.connect(**cnx_kwargs)
TypeError: function takes at most 20 keyword arguments (25 given)
use_pure=True fixes the problem, but it is much slower than use_pure=False. The previous version was OK with use_pure=False.
def connection(self, shareable=True):
    if shareable and self._maxshared:
        self._lock.acquire()
        try:
            while (not self._shared_cache and self._maxconnections
                    and self._connections >= self._maxconnections):
                self._wait_lock()  # <<< this line
            ....
The code self._wait_lock() won't execute, because _connections > 0 leads to _shared_cache not being empty.
Using DBUtils.PooledDB on Python 3.5, tested with PyGreSQL, psycopg2cffi, and mysqlclient.
If I try to define a cursor using the with statement, the library raises an exception. For example:
pool = PooledDB(pgdb, 100, [connection params])
with pool.cursor() as cur:
    cur.execute("select 1")
this code generates an AttributeError: __exit__ exception. This happens using any database connector, so it is not related to the underlying implementation of the external libraries. The solution is to rewrite the code as
pool = PooledDB(pgdb, 100, [connection params])
cur = pool.cursor()
cur.execute("select 1")
cur.close()
but this makes the refactoring of pre-existing code not so immediate.
https://pypi.org/project/DBUtils/ still does not support Python 3.8.
Hello, first of all, I am not a native English speaker, so my expression may not be clear.
I want to understand the role of mincached.
The parameter introduction says:
mincached: the initial number of idle connections in the pool (the default of 0 means no connections are made at startup)
But there is this sentence in the introduction of PooledDB:
Besides the pool of shared connections, you can also set up a pool of at least mincached and at the most maxcached idle connections that will be used whenever a thread is requesting a dedicated database connection or the pool of shared connections is not yet full.
Will PooledDB keep connections open based on mincached?
Can these connections be observed using show processlist?
Thank you so much.
I'm trying to connect through a unix socket on localhost using a connection pool.
Is that possible?
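It should be, as long as the underlying driver supports it: PooledDB forwards any extra keyword arguments to the creator's connect() function, and pymysql.connect() accepts a unix_socket parameter. A sketch of the forwarded arguments (the socket path and credentials below are placeholders; the pool construction is shown commented out since it needs a running server):

```python
# Keyword arguments that PooledDB would forward to pymysql.connect();
# the unix_socket parameter selects a local socket instead of TCP.
conn_kwargs = {
    "unix_socket": "/var/run/mysqld/mysqld.sock",  # placeholder path
    "user": "root",
    "password": "secret",
    "database": "test",
}

# With a running MySQL server this would be:
#   import pymysql
#   from dbutils.pooled_db import PooledDB  # DBUtils 2.x import path
#   pool = PooledDB(creator=pymysql, mincached=1, maxcached=5, **conn_kwargs)
#   conn = pool.connection()
print(sorted(conn_kwargs))
```

The same pattern applies to other drivers; e.g. psycopg2 connects through a socket when host points at the socket directory.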
Create a PooledDB, get a connection, and set the setsession argument to autocommit = 1; it's OK.
BUT when the connection is not available and PooledDB tries to get a new connection, the PooledDB setsession argument does not work: autocommit changes from 1 back to 0.
Here is some example code.
from DBUtils.PooledDB import PooledDB
import pymysql

pool = PooledDB(creator=pymysql,
                maxconnections=4,
                mincached=2,
                maxcached=0,
                blocking=True,
                maxusage=None,
                setsession=['SET AUTOCOMMIT = 1'],  # Pay attention to this
                host='127.0.0.1',
                port=3306,
                user='root',
                password='pwd',
                database='alerter',
                charset='utf8')

insert_sql = "INSERT INTO `alerter`.`al_play_log` (`src_type`, `src_ip`, `led_style`, `led_color`, `led_flag`, `text`, `play_time`, `tone`) " \
             "VALUES (4, '192.168.0.210', 15, '[\"#00FFFF\"]', '[0.05, 1]', 'content', 0, NULL)"

conn = pool.connection()
cursor = conn.cursor()
ret = cursor.execute(insert_sql)
conn.close()

print('new connection created; check the log, then use the mysql kill command to kill the current connection.')
input()

conn = pool.connection()
cursor = conn.cursor()
ret = cursor.execute(insert_sql)
conn.close()
Execute the code according to the following steps: use the show processlist command to get the current thread id, then use the kill command to kill it. This is the log of my database:
2022-05-16T03:13:48.758866Z 207 Connect root@localhost on alerter using TCP/IP
2022-05-16T03:13:48.760342Z 207 Query SET AUTOCOMMIT = 0
2022-05-16T03:13:48.761637Z 207 Query SET AUTOCOMMIT = 1
2022-05-16T03:13:48.766279Z 208 Connect root@localhost on alerter using TCP/IP
2022-05-16T03:13:48.767477Z 208 Query SET AUTOCOMMIT = 0
2022-05-16T03:13:48.768686Z 208 Query SET AUTOCOMMIT = 1
2022-05-16T03:13:48.769913Z 208 Query ROLLBACK
2022-05-16T03:13:48.770811Z 207 Query ROLLBACK
2022-05-16T03:13:48.772879Z 208 Query INSERT INTO `alerter`.`al_play_log` (`src_type`, `src_ip`, `led_style`, `led_color`, `led_flag`, `text`, `play_time`, `tone`) VALUES (4, '192.168.0.210', 15, '["#00FFFF"]', '[0.05, 1]', 'content', 0, NULL)
2022-05-16T03:13:48.783426Z 208 Query ROLLBACK
2022-05-16T03:13:52.059120Z 192 Query show processlist
2022-05-16T03:13:59.223979Z 192 Query kill 207
2022-05-16T03:14:02.462124Z 192 Query kill 208
2022-05-16T03:14:13.903155Z 209 Connect root@localhost on alerter using TCP/IP
***2022-05-16T03:14:13.904506Z 209 Query SET AUTOCOMMIT = 0***
2022-05-16T03:14:13.906474Z 209 Query INSERT INTO `alerter`.`al_play_log` (`src_type`, `src_ip`, `led_style`, `led_color`, `led_flag`, `text`, `play_time`, `tone`) VALUES (4, '192.168.0.210', 15, '["#00FFFF"]', '[0.05, 1]', 'content', 0, NULL)
2022-05-16T03:14:13.908776Z 209 Query ROLLBACK
2022-05-16T03:14:13.956031Z 209 Quit