Comments (7)
https://github.com/go-sql-driver/mysql/issues/1125
https://github.com/go-sql-driver/mysql/issues/278
I have read the issues above, and found that this has been open since 2013, yet as of 2023 go-sql-driver still does not support compressed mode.
I wish go-sql-driver supported compressed mode, because it can improve performance considerably when reading many rows (100k and more) from MySQL.
In my project, the program reads many rows from MySQL at startup, and it often executes large SQL statements (600 MB+) in a transaction through go-sql-driver.
But it now seems that go-sql-driver will never support compressed mode.
from mysql.
In sysown/proxysql#4204 the possibility of using ProxySQL as a middle layer to enable compression is being discussed:
- go-sql-driver connects to proxysql without compression
- proxysql connects to the backend (MySQL or otherwise) using compression (currently only zlib supported)
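For reference, ProxySQL enables backend compression per server via the `compression` flag in its `mysql_servers` admin table. A minimal sketch, run against the ProxySQL admin interface (the hostname is a placeholder; column names should be double-checked against the ProxySQL version in use):

```sql
-- Register a backend with zlib compression enabled (compression=1),
-- then activate and persist the configuration.
INSERT INTO mysql_servers (hostgroup_id, hostname, port, compression)
VALUES (0, 'mysql-backend.example.com', 3306, 1);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```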
If somebody wants to work on this, here's the link to the MySQL spec.
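To give a feel for what an implementation involves: per the MySQL protocol documentation, each compressed packet carries a 7-byte header (3-byte little-endian compressed length, 1-byte sequence id, 3-byte little-endian uncompressed length) followed by a zlib-deflated body. A minimal round-trip sketch of that framing, not the driver's actual code (real drivers also skip compression for small payloads, typically under ~50 bytes):

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io"
)

// wrapCompressed frames payload as a MySQL compressed-protocol packet:
// 3-byte LE compressed length, 1-byte sequence id, 3-byte LE uncompressed
// length, then the zlib-deflated body.
func wrapCompressed(seq byte, payload []byte) ([]byte, error) {
	var body bytes.Buffer
	zw := zlib.NewWriter(&body)
	if _, err := zw.Write(payload); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	comp := body.Bytes()
	pkt := make([]byte, 7+len(comp))
	putUint24(pkt[0:3], uint32(len(comp)))    // compressed payload length
	pkt[3] = seq                              // compressed sequence id
	putUint24(pkt[4:7], uint32(len(payload))) // length before compression
	copy(pkt[7:], comp)
	return pkt, nil
}

// unwrapCompressed reverses wrapCompressed.
func unwrapCompressed(pkt []byte) (seq byte, payload []byte, err error) {
	compLen := getUint24(pkt[0:3])
	seq = pkt[3]
	zr, err := zlib.NewReader(bytes.NewReader(pkt[7 : 7+compLen]))
	if err != nil {
		return 0, nil, err
	}
	defer zr.Close()
	payload, err = io.ReadAll(zr)
	return seq, payload, err
}

func putUint24(b []byte, v uint32) { b[0], b[1], b[2] = byte(v), byte(v>>8), byte(v>>16) }
func getUint24(b []byte) uint32    { return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 }

func main() {
	original := bytes.Repeat([]byte("row data "), 100)
	pkt, err := wrapCompressed(0, original)
	if err != nil {
		panic(err)
	}
	_, payload, err := unwrapCompressed(pkt)
	if err != nil {
		panic(err)
	}
	// Repetitive row data compresses well: the framed packet is far
	// smaller than the 900-byte original.
	fmt.Println(len(original), len(pkt), bytes.Equal(original, payload))
}
```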
I see the referenced issue was closed with the comment that compression wasn't faster and that all connections in the pool would be affected. While I understand that, there are 2 cases where I have seen the compressed protocol be beneficial:
(1) when a large number of clients connect to the server and the result sets returned are very large. In this situation the network interface of the destination MySQL server can become saturated; using compression avoids that and thus increases overall throughput. In one specialised use case this was clearly beneficial.
(2) when pulling a lot of data over a slow, high-latency (remote) connection, the gain from compression can be noticeable.
While neither use case may be frequent, I have seen both benefit from the compression option, so it would seem useful to provide it in the Go client for those who need it.
So allowing this option would be most useful.
@sjmudd this issue is not closed. The referenced one (#278) is.
But adding compression is definitely not a priority for us maintainers.
Just out of curiosity so I can better see where you're coming from:
What kind of application uses a MySQL database server accessed over a high latency network from a huge number of clients?
I'm coming from a world where the database server is next to never the api endpoint - and that's also a world where frontend and backend servers are connected with a robust and fast connection.
Sorry, they are 2 different use cases. I recently built something (for me to learn Go) called pstop: see github.com/sjmudd/pstop. Connecting from home via a VPN to the company servers to view statistics was quite slow (performance_schema data can be quite large, and my link was slow due to tunneling etc.). Here I think reducing the data sent to the client would speed up the resulting query times. This is one example.
The other example is on a normal network: a cluster of dedicated processing clients collecting information from a cluster of db servers pulls data very aggressively, and the result sets are very large (a lot of processing happens on the client to off-load some of the work from the database servers). Here it was observed that the 1 Gb network link was sometimes saturated, which triggered TCP back-off and thus extra delays. Enabling compression on this cluster of boxes reduced the network bandwidth used (yes, there was likely a small extra CPU overhead from compression), but since the network link was no longer saturated, the gain was definitely worth it. For this cluster in particular the application explicitly enables compression and benefits from it as a whole.
I guess I was hoping the option would be there. There is no explicit comment in the documentation, as far as I am aware, that this is not available, so I had to hunt around and finally found this issue, which explains it has not been implemented yet. I understand your reasoning, but given that I have seen a real use case where compression was helpful, I thought it worthwhile to mention.
I hope that clarifies my previous post?
Also upvoting this, as we've discovered with Aurora DB that our I/O costs are starting to exceed 50% of the overall cluster cost. So if we can reduce the amount of data transmitted to and received from Aurora, we'll have substantial cost savings.
This is mainly an issue because we're saving TEXT columns totaling more than 5 kB per row.