Comments (9)
@Loquacity Actually, decompression is quite user-unfriendly as far as I was able to determine. You essentially need to decompress all chunks individually by name:
SELECT decompress_chunk('_timescaledb_internal._hyper_1_203_chunk');
This can theoretically be aggregated like this (derived from the compression example in the docs):
SELECT decompress_chunk(i) FROM show_chunks('mytable', older_than => INTERVAL '120 days') i;
This works well only if you can ensure that the interval (or condition) 1. includes ALL compressed chunks and 2. includes NO uncompressed chunks. If you miss compressed chunks, you won't be able to remove the compress property. If you include uncompressed chunks, the process fails and exits on the first one, right in the middle of the run. That is even worse, because the result set is ordered arbitrarily, so you cannot simply rerun the command: it will now fail on the first chunk in the set that has already been decompressed.
I circumvented this annoying problem by printing chunk names for a larger range to the console, so I can be sure all compressed chunks are included, e.g.
SELECT i FROM show_chunks('ldt.machineconnectparameterlogs', older_than => INTERVAL '100 days') i;
This yields a list of chunks (one per line); in more complicated scenarios you could even omit the filter:
_timescaledb_internal._hyper_1_2_chunk
_timescaledb_internal._hyper_1_3_chunk
_timescaledb_internal._hyper_1_4_chunk
[...]
I then paste this list into Notepad++ and use a multi-column edit (Alt + mouse selection) to construct a list of commands like in the initial example:
SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');
SELECT decompress_chunk('_timescaledb_internal._hyper_1_3_chunk');
SELECT decompress_chunk('_timescaledb_internal._hyper_1_4_chunk');
[...]
Pasting this whole block back into the CLI then executes each line individually. The compressed chunks are decompressed, and the uncompressed ones produce an error, which I don't care about; since each line is a separate command, execution is not terminated by an error.
While this is a terribly ugly process, it's highly productive for getting the job done.
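If you are working in psql, a possible alternative to the editor round-trip is to generate the commands with format() and execute each result row with the \gexec meta-command (available in psql 9.6+). This is a sketch, not from the thread; mytable and the interval are placeholders to adapt:

```sql
-- Build one decompress statement per chunk and let psql run each result
-- row as a statement; by default, an error in one generated statement is
-- reported but does not stop the remaining rows (unless ON_ERROR_STOP is set).
SELECT format('SELECT decompress_chunk(%L);', i)
FROM show_chunks('mytable', older_than => INTERVAL '100 days') i \gexec
```

format's %L quotes each chunk name as a literal, so the generated statements match the per-chunk commands shown above.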
Eventually, when everything is decompressed, you will be able to run
ALTER TABLE mytable SET (timescaledb.compress=false);
successfully.
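Before attempting the ALTER TABLE, it may help to verify that nothing is left compressed. A sketch using chunk_compression_stats() from the TimescaleDB informational API (column names per the API docs; verify against your version, and replace mytable with your hypertable):

```sql
-- Any rows returned here are chunks that still need decompress_chunk()
SELECT chunk_name, compression_status
FROM chunk_compression_stats('mytable')
WHERE compression_status = 'Compressed';
```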
@Puciek Your error is different from mine. You seem to have some additional dependencies/complexity that I've not been faced with so far.
I seem to have the same issue as @Puciek ; I ran the following: (1) removed the compression policy, (2) decompressed all the compressed chunks, (3) tried to disable compression.
When I try to disable compression, I get the following error message:
psql> alter table <table_name> set (timescaledb.compress=false);
[2021-10-25 12:04:41] [2BP01] ERROR: cannot drop table _timescaledb_internal._compressed_hypertable_292 because other objects depend on it
[2021-10-25 12:04:41] Detail: table _timescaledb_internal.compress_hyper_292_7671_chunk depends on table _timescaledb_internal._compressed_hypertable_292
[2021-10-25 12:04:41] table _timescaledb_internal.compress_hyper_292_7673_chunk depends on table
<snip>
[2021-10-25 12:04:41] Hint: Use DROP ... CASCADE to drop the dependent objects too.
Any thoughts on what the issue might be?
There's this in the docs now (compression.md line 403, from timescale/docs.timescale.com-content#525):
Next, pause the job with:
SELECT alter_job(<job_id>, scheduled => false);
Does that resolve this issue?
It doesn't; those are different things. Specifically, the compress setting needs to be removed from the hypertable settings itself.
Unfortunately, disabling compression doesn't really seem to be covered in the docs at all...?
The ALTER TABLE command above will fail if there are any compressed chunks.
> Unfortunately, disabling compression doesn't really seem to be covered in the docs at all...?
> The ALTER TABLE command above will fail if there are any compressed chunks.
Happy to add it, how do you do it?
I actually just tried that, as I need to change the table structure in a way that renaming won't cover, but when I try the command:
xxxxxx> ALTER TABLE data_source_tagentry SET (timescaledb.compress=false)
[2021-10-07 08:04:05] [2BP01] ERROR: cannot drop table _timescaledb_internal._compressed_hypertable_4 because other objects depend on it
[2021-10-07 08:04:05] Detail: table _timescaledb_internal.compress_hyper_4_367_chunk depends on table _timescaledb_internal._compressed_hypertable_4
[2021-10-07 08:04:05] table _timescaledb_internal.compress_hyper_4_379_chunk depends on table _timescaledb_internal._compressed_hypertable_4
[2021-10-07 08:04:05] Hint: Use DROP ... CASCADE to drop the dependent objects too.
So clearly missing something!
@adschm Yeah, I wound up just making a copy of the table and inserting the data over from the compressed table; it worked, wasn't wonky, and was even faster than decompressing all those chunks. A bit weird!
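That copy-and-reinsert workaround might look roughly like this. This is a sketch with placeholder names: mytable_new, the 'time' column, and the create_hypertable arguments are assumptions to adapt to your schema:

```sql
-- Create an uncompressed replacement table with the same shape,
-- make it a hypertable, then copy the data over. TimescaleDB
-- decompresses transparently on read, so a plain SELECT suffices.
CREATE TABLE mytable_new (LIKE mytable INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
SELECT create_hypertable('mytable_new', 'time');
INSERT INTO mytable_new SELECT * FROM mytable;
-- Once verified, drop the old table and rename the new one into place.
```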
> This can theoretically be aggregated like this (derived from the compression example in the docs):
> SELECT decompress_chunk(i) FROM show_chunks('mytable', older_than => INTERVAL '120 days') i;
> This works well only if you can ensure that the interval (or condition) 1. includes ALL compressed chunks and 2. includes NO uncompressed chunks. If you miss compressed chunks, you won't be able to remove the compress property. If you include uncompressed chunks, the process fails and exits on the first one, right in the middle of the run. That is even worse, because the result set is ordered arbitrarily, so you cannot simply rerun the command: it will now fail on the first chunk in the set that has already been decompressed.
Actually, decompress_chunk() appears to have a switch (the if_compressed parameter) to reduce this error to a warning:
https://docs.timescale.com/api/latest/compression/decompress_chunk/#sample-usage
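With that parameter, the aggregated one-liner from earlier in the thread no longer aborts on already-uncompressed chunks; they just produce a notice. Parameter name per the linked API page; check availability and default in your TimescaleDB version:

```sql
-- Already-uncompressed chunks are skipped with a notice instead of an error,
-- so the statement runs to completion over the whole chunk set.
SELECT decompress_chunk(i, if_compressed => true)
FROM show_chunks('mytable', older_than => INTERVAL '120 days') i;
```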