Comments (12)
maxmemory-clients 100% means that when the clients' memory usage reaches 100% of maxmemory, Valkey will disconnect some clients; this config doesn't influence data eviction directly.
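For context, a minimal illustrative valkey.conf fragment (values chosen for the example, not recommendations) showing that the client cap can be set either as an absolute size or as a percentage of maxmemory:

```conf
# Illustrative only: cap total client memory independently of the data set.
maxmemory 4gb
maxmemory-clients 1gb
# Or express the cap as a percentage of maxmemory:
# maxmemory-clients 25%
# 0 (the current default) means no limit on client memory.
```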
I have always wished that Redis/Valkey could track different types of memory usage, such as the amount of memory used for storing user data versus the amount used for system operation (like clients). That way, users could set independent limits for data and for system operation, say maxmemory-data and maxmemory-system. If the memory for data exceeds maxmemory-data, eviction would be triggered; if the memory for system operation goes beyond maxmemory-system, because of too many connections or excessive buffers for example, disconnection would be initiated. This would allow for clearer planning and management, rather than having memory usage by clients and similar operations lead to the eviction of data, as is currently the case.
About the latency monitor, enabling it is not a big deal.
/* Add the sample only if the elapsed time is >= to the configured threshold. */
#define latencyAddSampleIfNeeded(event, var) \
    if (server.latency_monitor_threshold && (var) >= server.latency_monitor_threshold) \
        latencyAddSample((event), (var));
It has almost no impact, because it is difficult to actually reach the threshold, so in practice it is merely a conditional check. Even when the threshold is truly exceeded, logging the sample takes only microseconds, which is far less than 100ms.
from valkey.
repl-diskless-load disabled -> on-empty-db. Any drawbacks?
@soloestoy I also agree that we need to keep some tight memory accounting for the user data. I wonder, though, whether evictions (of both clients and data) should be triggered by those thresholds alone. For example, how do we account for memory waste like fragmentation? It is very hard to evaluate the fragmentation ratio of each of the system/data memory pools and take it into account.
I agree about the memory accounting needs, especially for clients, which can consume resources independently of each other, so you may want independent control over them.
I think another aspect of memory usage is transient allocations, which are allocated and freed during command execution (processing memory, i.e. memory for processing needs), like Lua allocations, module allocations, temporary objects, and so on. This memory usage can cause memory pressure or swapping without being apparent in any data point except peak memory, which only reflects the all-time maximum, not recent usage.
[On this note, I have hoped to raise one day the idea of memory pool isolation, where we use different memory pools for data, for clients, and for processing memory. I have PoC'd this idea using jemalloc's private arenas. @ranshid, I think this would solve the association of fragmentation cost with usage type. The motivation for memory pools is of course of much wider scope, since the memory life cycles are very different, so isolating the usages would improve overall efficiency. Maybe I'll raise a separate issue on this.]
@ranshid, I think this will solve the fragmentation cost association to usage type.
I agree, depending on the complexity of it :)
ohh, we already have the history limit:
#define LATENCY_TS_LEN 160 /* History length for every monitored event. */

/* The latency time series for a given event. */
struct latencyTimeSeries {
    int idx;      /* Index of the next sample to store. */
    uint32_t max; /* Max latency observed for this event. */
    struct latencySample samples[LATENCY_TS_LEN]; /* Latest history. */
};
Some suggestions:
- maxmemory-clients: 0 -> 100%, to limit client memory usage and avoid data eviction (or even machine-level OOM) caused by network read/write buffering.
- latency-monitor-threshold: 0 -> 100, to enable latency monitoring by default (100ms), so that we are still able to track things down after a problem occurs.
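The suggestions above, written out as a valkey.conf fragment (the "was" comments reflect the current defaults as stated in this thread):

```conf
# Proposed new defaults:
maxmemory-clients 100%          # was 0 (no client memory limit)
latency-monitor-threshold 100   # was 0 (monitoring disabled); milliseconds
```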
Interesting. @soloestoy can you explain what maxmemory-clients 100% means? Clients and data share the maxmemory limit and are evicted at the same time? If Valkey is used as an LRU cache, maybe we want to evict data first?
As for the latency monitor, if it has almost no CPU overhead, I'm OK with enabling it by default. Please reassure me that I don't have to worry about this comment in the config file:
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load.
Let's do it in 8.0; I think we should configure the ideal configuration items for users.
BTW, I want to change latency-monitor-threshold to a default of 10ms, just like the slowlog's default value.
@enjoy-binbin what's the current default latency-monitor-threshold? Is there a max size for the latency history? I'm thinking about the users who are not using this feature, so that it doesn't eat a lot of memory or CPU.
The current default latency-monitor-threshold is 0 (disabled). I see you guys discussed this in here #653 (comment).
On the memory concern: yeah, maybe, or a latency-history-max-len similar to slowlog-max-len? I think latency monitoring is useful like the slowlog; the two can be treated as analogous.
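The slowlog analogy as a valkey.conf fragment; note that latency-history-max-len is only the hypothetical knob floated above, not an existing option:

```conf
slowlog-max-len 128             # existing: bounded slowlog history
# latency-history-max-len 160   # hypothetical: per-event latency sample cap
```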
repl-diskless-load disabled -> on-empty-db. Any drawbacks?
Seems to have no drawbacks.
Speaking of repl-diskless-load, should we consider adding an empty-db option? It would perform a FLUSHALL first so that we could then take the on-empty-db path.
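For reference, the existing values of this option alongside the one floated here; the empty-db line is hypothetical and does not exist in Valkey today:

```conf
# Existing values:
repl-diskless-load disabled      # write the RDB to disk first (current default)
repl-diskless-load on-empty-db   # diskless load only when the db is empty
repl-diskless-load swapdb        # keep old data in RAM while loading
# Hypothetical addition discussed above:
# repl-diskless-load empty-db    # FLUSHALL first, then load diskless
```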