
ltss's People

Contributors

antoonhuiskens, benediktallendorf, charithmadhuranga, cm000n, dekiesel, dependabot[bot], freol35241, nwerter


ltss's Issues

cannot start (update schema) - duplicate entity

It seems that after a schema/version update, ltss fails at startup and no data is recorded anymore. The latest row in the ltss table is from 2023-07-26 22:20:31.732 +0300, so it may not be related to the most recent update but to something released back in July.

Error:

Logger: custom_components.ltss
Source: custom_components/ltss/__init__.py:133
Integration: ltss ([documentation](https://github.com/freol35241/ltss))
First occurred: 11:05:58 AM (1 occurrences)
Last logged: 11:05:58 AM

Error during connection setup: (psycopg2.errors.UniqueViolation) could not create unique index "18_2428_ltss_pkey" DETAIL: Key ("time", entity_id)=(2022-04-01 11:13:10.023225+00, sun.sun) is duplicated. CONTEXT: SQL statement " ALTER TABLE _timescaledb_internal._hyper_1_18_chunk ADD CONSTRAINT "18_2428_ltss_pkey" PRIMARY KEY ("time", entity_id) " PL/pgSQL function _timescaledb_internal.chunk_constraint_add_table_constraint(_timescaledb_catalog.chunk_constraint) line 29 at EXECUTE [SQL: ALTER TABLE ltss DROP CONSTRAINT ltss_pkey CASCADE, ADD PRIMARY KEY(time,entity_id);] (Background on this error at: https://sqlalche.me/e/20/gkpj) (retrying in 3 seconds)
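
A possible cleanup before retrying the migration, assuming the pre-2.x schema (which still has the id column) is in place and the affected chunks are not compressed:

-- Hypothetical dedup sketch: keep only the oldest row per (time, entity_id)
-- pair so that the new primary key can be created.
DELETE FROM ltss a
USING ltss b
WHERE a.id > b.id
  AND a.time = b.time
  AND a.entity_id = b.entity_id;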

Investigate ability to add other long-term storage backends

It would be great if other long-term storage backends were considered: victoriametrics, m3db, cortex. These backends may provide the following improvements compared to timescaledb:

  • easier to manage
  • better compression rate
  • better integration with the Prometheus ecosystem due to native support for the Prometheus querying API, which Grafana uses for querying Prometheus-compatible data sources
  • support for PromQL, which is better suited for typical queries over time series data than SQL, Flux or InfluxQL

sqlalchemy problem: "Requirements for ltss not found"

Updating to Home Assistant 2023.3.0 and ltss 2.0.1, I get this when trying to restart:

The system cannot restart because the configuration is not valid: Integration error: ltss - Requirements for ltss not found: ['sqlalchemy>=1.0,<2.0'].

Caused by, presumably,

Setup failed for custom integration ltss: Requirements for ltss not found: ['sqlalchemy>=1.0,<2.0'].

Which in turn comes from

2023-03-02 07:37:10.778 ERROR (SyncWorker_9) [homeassistant.util.package] Unable to install package sqlalchemy>=1.0,<2.0: ERROR: Cannot install sqlalchemy<2.0 and >=1.0 because these package versions have conflicting dependencies. ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Capturing Switch States

Forgive me if this has already been covered, but alongside capturing sensor states, is it possible to capture switch states? I have the default install, which captures all sensors; I would like to also capture switches. Any pointers?

Enhancement: Use !secrets

Thank you for the addon, it works very well in my setup.

Would it be possible to use !secrets so that the password doesn't have to be saved in the config?

ltss broken since home assistant update to 2023.3.0 version

After updating my HA system and the ltss component to version 2.0.0, I immediately get the following error message in the HA Core log:

Component error: ltss - cannot import name 'with_metaclass' from 'sqlalchemy.util' (/usr/local/lib/python3.10/site-packages/sqlalchemy/util/__init__.py)

Dependency Errors

Reporting two dependency errors I got when installing this from HACS: geoalchemy2 and psycopg2.
psycopg2 looks to be OS-related (FreeBSD 12), but I'm not sure what caused the failure of geoalchemy2.

I can see the manifest.json in custom_components/ltss on my machine with geoalchemy2 listed.
I was able to install geoalchemy2 with pip in the homeassistant venv which removed this error.

After a restart of homeassistant, I got an error that psycopg2 is missing but this I think is specific to FreeBSD.
On FreeBSD 12, I had to manually install py37-psycopg2 (using pkg) and then do pip install psycopg2.
This works even though I'm using python 3.8.
After another restart, I see the ltss hypertable in the database with populated rows and no errors on startup.

geoalchemy2 error:

Traceback (most recent call last):
  File "/usr/local/share/homeassistant/lib/python3.8/site-packages/homeassistant/loader.py", line 541, in _load_file
    module = importlib.import_module(path)
  File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/homeassistant/homeassistant/custom_components/ltss/__init__.py", line 39, in <module>
    from .models import Base, LTSS
  File "/home/homeassistant/homeassistant/custom_components/ltss/models.py", line 16, in <module>
    from geoalchemy2 import Geometry
ModuleNotFoundError: No module named 'geoalchemy2'

System Info:

System Health

Home Assistant Core Integration

version: 0.118.4
installation_type: Home Assistant Core
dev: false
hassio: false
docker: false
virtualenv: true
python_version: 3.8.6
os_name: FreeBSD
os_version: 12.2-RC3
arch: amd64
timezone: America/Los_Angeles

Lovelace

dashboards: 1
mode: auto-gen
resources: 0

Numeric states and without attributes

Hello

First of all thanks for the Plugin, but there are two things that kinda bother me:

  1. It would be great to have an option to store states as double precision instead of strings. This would decrease the required disk space and make it easier to plot charts in Grafana. String states could be cast to numbers (e.g. On/true = 1, Off = 0; for unavailable, store null or nothing at all, ...).
  2. Add an option to not store attributes. I only ever need the state in long-term storage, and attributes take quite a lot of space to store.

I will fork this repository and implement the two "features" for myself and create a pull request.

Thanks and best regards
Philip

ValueError: A string literal cannot contain NUL (0x00) characters.

The integration writes to the DB only for a few seconds after an HA/system reboot, then stops. The HA logs show this error:

Logger: root
Source: custom_components/ltss/__init__.py:207
integration: Long Time State Storage (LTSS)
First occurred: 10:44:36 (1 occurrences)
Last logged: 10:44:36

Uncaught thread exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
    self.run()
  File "/config/custom_components/ltss/__init__.py", line 207, in run
    with session.begin():
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/util.py", line 146, in __exit__
    with util.safe_reraise():
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/util.py", line 144, in __exit__
    self.commit()
  File "<string>", line 2, in commit
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go
    ret_value = fn(self, *arg, **kw)
                ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1257, in commit
    self._prepare_impl()
  File "<string>", line 2, in _prepare_impl
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go
    ret_value = fn(self, *arg, **kw)
                ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1232, in _prepare_impl
    self.session.flush()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4296, in flush
    self._flush(objects)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4431, in _flush
    with util.safe_reraise():
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4392, in _flush
    flush_context.execute()
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 466, in execute
    rec.execute(self)
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 642, in execute
    util.preloaded.orm_persistence.save_obj(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 93, in save_obj
    _emit_insert_statements(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 1048, in _emit_insert_statements
    result = connection.execute(
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1408, in execute
    return meth(
           ^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 513, in _execute_on_connection
    return connection._execute_clauseelement(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1630, in _execute_clauseelement
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1839, in _execute_context
    return self._exec_single_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1979, in _exec_single_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2338, in _handle_dbapi_exception
    raise exc_info[1].with_traceback(exc_info[2])
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1960, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
    cursor.execute(statement, parameters)
ValueError: A string literal cannot contain NUL (0x00) characters.

Feature request: Configurable chunk interval

From https://community.home-assistant.io/t/custom-component-long-time-state-storage-ltss-utilizing-timescaledb/155047/30?u=freol:

According to best practice, a chunk should not be larger than 25% of the available RAM. Since I run TimescaleDB on a Pi4 2GB, I have allocated it a maximum of 512MB. 25% of that, 128MB, would be the maximum a chunk should have. But since I generate 150MB per day (>300MB for sure in the future), I set the interval to 12h (SELECT set_chunk_time_interval('ltss', 432000000);).

However, this only affects the next chunk. The current one with 30 days is still running. The only way to shrink an existing chunk is, in my opinion, to export the database, recreate the database (with chunk interval set to 12h) and import the data again.

But how can I tell LTSS to create the database with a 12h interval?

Update:
For me locally I have adapted __init__.py.
Line 311:
from "chunk_time_interval => interval '1 month',"
to "chunk_time_interval => interval '12 hours',".

Previously I disabled ltss in configuration.yaml, then restarted HA, then deleted and recreated the database, then re-enabled ltss (after the code change), then restarted HA.

Worked great (also the previous backup via CSV and the import afterwards). But maybe it's an idea to make the interval a configurable value? I think 30 days might be too much for many.
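
For reference, TimescaleDB's documented way to change the interval for future chunks (existing chunks keep their size) is:

-- Affects only newly created chunks, matching the 12h example above:
SELECT set_chunk_time_interval('ltss', INTERVAL '12 hours');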

ltss not working since home assistant 2023.3.0b0 release

Hi,

ltss is not working anymore since the 2023.3.0b0 release. It was working fine with 2023.2.x. See the error below.
With this release, SQLAlchemy is bumped to version 2.

Logger: homeassistant.util.package
Source: util/package.py:107
First occurred: 23:14:17 (3 occurrences)
Last logged: 23:14:31

Unable to install package sqlalchemy>=1.0,<2.0: ERROR: Cannot install sqlalchemy<2.0 and >=1.0 because these package versions have conflicting dependencies. ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts [notice] A new release of pip is available: 23.0 -> 23.0.1 [notice] To update, run: pip install --upgrade pip

Logger: homeassistant.setup
Source: setup.py:205
First occurred: 23:14:52 (1 occurrences)
Last logged: 23:14:52

Setup failed for custom integration ltss: Requirements for ltss not found: ['sqlalchemy>=1.0,<2.0']

SQL alchemy error in home assistant v. 2023.2.3

2023-03-07 15:17:18.358 WARNING (SyncWorker_1) [homeassistant.loader] We found a custom integration ltss which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant
2023-03-07 15:17:55.334 ERROR (MainThread) [homeassistant.setup] Setup failed for custom integration ltss: Requirements for ltss not found: ['sqlalchemy>=2.0,<3.0'].

is PostGIS extension really needed?

Hi freol,
I found this HA component via its forum; thank you for this awesome job.

Now, is PostGIS really needed for your component?
Does it work with only TimescaleDB?

I'm running HA on a Raspberry Pi with Docker, and it seems that there is no Docker image for timescale+postgis, while there is a working Docker image for timescaledb alone.

thanks.

operation not supported on hypertables that have compression enabled

Homeassistant 2023.06.01
LTSS 2.1.0

Error at startup

2023-06-13 14:01:59.553 WARNING (LTSS) [custom_components.ltss.migrations] Migrating you LTSS table to the latest schema, this might take a couple of minutes!
2023-06-13 14:01:59.559 ERROR (LTSS) [custom_components.ltss] Error during connection setup: (psycopg2.errors.FeatureNotSupported) operation not supported on hypertables that have compression enabled

[SQL: ALTER TABLE ltss
                    DROP CONSTRAINT ltss_pkey CASCADE,
                    ADD PRIMARY KEY(time,entity_id);]
(Background on this error at: https://sqlalche.me/e/20/tw8g) (retrying in 3 seconds)

Feature Request: commit_interval

First of all, thanks for this awesome addon!

Secondly, I'd like to have a separate commit_interval setting for LTSS. I currently have commit_interval: 2 for the normal recorder, which (if I don't misinterpret it) commits rows to the database every 2 seconds. I wouldn't need such frequent updates for LTSS; once per 30-60 seconds or even less frequently, like once per 5 minutes, would do.

As far as I know, I'm not able to set this up at the moment, therefore I'm writing this feature request. In case this is already supported, please let me know 🙂

Feature Request: include/exclude `entity_globs` as in the `recorder` component of homeassistant

Hey there,

I recently needed entity_globs, as I've added an entity that tracks the number of seconds a switch has been in its position.

To keep homeassistant and ltss in sync, I wanted to apply the same settings to this component as well.
Sadly, it does not support them (yet).

I am willing to contribute, but I don't have time for it at the moment.
If you are planning to work on it, please let me know.
If others want to contribute, please do so.

I'll just create this feature request to keep track of this.

Thanks!

Convert attributes column to jsonb

I think it can be beneficial for both speed and space to implement the 'attributes' column as the JSONB datatype instead of text.
This way, you can also index it with:

create index idxAttributes on ltss USING gin (attributes)

And from there create efficient indexed queries like:

select * from ltss where attributes @> '{
    "icon": "mdi:delete-empty"
}'
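
A possible one-off conversion, assuming every existing row in the attributes column holds valid JSON text and the hypertable is not compressed:

-- Parse the text column in place; this rewrites the table and will fail
-- on any row that is not valid JSON.
ALTER TABLE ltss ALTER COLUMN attributes TYPE jsonb USING attributes::jsonb;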

Unicode translation to SQL_ASCII encoding fails with temperature, wind direction and cubic-meter-gas unit_of_measurement

Hi,

First of all, I am very happy with LTSS.
I'm using a PostgreSQL database instead of InfluxDB to be able to hook up other data visualization tools (like Tableau) while keeping the possibility of Grafana (examples by the HA community).

It was easy to set up, in an LXC container created with the Proxmox Helper script.
The most difficult part was to find and select the domains and/or entities to include or exclude.

Running it for a couple of hours now, I decided to check the logs.
It is working pretty well and data is moved to PostgreSQL/TimescaleDB.

But it fails for sensors whose data contains special characters when encoding from Unicode to SQL_ASCII.
Error Message:

DETAIL:  Unicode escape value could not be translated to the server's encoding SQL_ASCII.
CONTEXT:  JSON data, line 1: ..., "voltage": 3025, "unit_of_measurement": "\u00b0...

Here unit_of_measurement can be '°C' (degrees Celsius), '°' for wind direction, or 'm³' for cubic meters of gas.
I also drive a Skoda, for which I use the skoda_connect integration to get data into HA.
And the name is officially Škoda, with that 'strange' Š.
There are quite a lot of devices with a temperature entity, which I would not like to miss in LTSS.

Is there a solution for this?
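
One hedged approach: PostgreSQL cannot change the encoding of an existing database, so if the server really reports SQL_ASCII, the usual fix is to dump the data and recreate the database with UTF8 (the database name below is just an example):

-- Check what the server is actually using:
SHOW server_encoding;
-- Recreate with UTF8; TEMPLATE template0 is required to override the encoding:
CREATE DATABASE homeassistant ENCODING 'UTF8' TEMPLATE template0;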

constraint "ltss_pkey" of relation "ltss" does not exist causes migration to fail to v.2.1.0

Ok. Just had another go at upgrading:
I'm getting this:

Error during connection setup: (psycopg2.errors.UndefinedObject) constraint "ltss_pkey" of relation "ltss" does not exist [SQL: ALTER TABLE ltss DROP CONSTRAINT ltss_pkey CASCADE, ADD PRIMARY KEY(time,entity_id);] (Background on this error at: https://sqlalche.me/e/20/f405) (retrying in 3 seconds)

Checking on compression:

# SELECT hypertable_name,chunk_name,is_compressed FROM timescaledb_information.chunks LIMIT 1;
 hypertable_name |    chunk_name    | is_compressed
-----------------+------------------+---------------
 ltss            | _hyper_1_1_chunk | f
(1 row)

I'm not sure why the ltss_pkey constraint doesn't exist.

I'd say that adding IF EXISTS to the ALTER TABLE statement would fix this.
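
A sketch of the suggested change to the migration statement, assuming the missing constraint is the only obstacle:

ALTER TABLE ltss
    DROP CONSTRAINT IF EXISTS ltss_pkey CASCADE,
    ADD PRIMARY KEY (time, entity_id);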

Doesn't look to be related to role permissions either:

# \z ltss
                                 Access privileges
 Schema | Name | Type  |      Access privileges      | Column privileges | Policies
--------+------+-------+-----------------------------+-------------------+----------
 public | ltss | table | ltssadmin=arwdDxt/ltssadmin+|                   |
        |      |       | grafanareader=r/ltssadmin   |                   |

Lot of "Error handling message: Unknown error" Messages since HA 0.109

Since HA version 0.109 I’m getting a lot of error messages which are probably caused by ltss:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/decorators.py", line 20, in _handle_async_response
    await func(hass, connection, msg)
  File "/usr/src/homeassistant/homeassistant/components/frontend/__init__.py", line 550, in websocket_get_translations
    msg.get("config_flow"),
  File "/usr/src/homeassistant/homeassistant/helpers/translation.py", line 329, in async_get_translations
    resources = flatten(resource_func(results[0], components, category))
  File "/usr/src/homeassistant/homeassistant/helpers/translation.py", line 127, in merge_resources
    new_value = translation_strings[component].get(category)
KeyError: 'ltss'

It seems the error message is related to the recent changes to how translations are handled in HA 0.109.

Create index on (entity_id, time)

Currently the TimescaleDB extension creates two indexes: one on entity_id and one on time. However, most queries will filter by both entity and time, and therefore a compound index on (entity_id, time DESC) will improve those queries considerably. The TimescaleDB manual also suggests creating such indexes.
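
A minimal sketch of such an index (the index name is just an example):

-- Matches the common "WHERE entity_id = ... AND time > ..." pattern:
CREATE INDEX IF NOT EXISTS ltss_entity_id_time_idx ON ltss (entity_id, time DESC);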

Using Attributes as JSON in Postgresql seems not to work

I was trying to use the attributes field within a PostgreSQL query, e.g.:

SELECT time, state, attributes, attributes::jsonb->'current_temperature' as CT
    FROM ltss
    WHERE entity_id = 'climate.hz_buro';

But it seems that the attributes field is not filled with a JSON object; it just contains a string. As a result, the PostgreSQL JSON query is empty (ct column).

             time              | state |                                                                                                                                                                                                                     attributes                                                                                                                                                                                                                      |  ct  
-------------------------------+-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------
 2020-10-23 23:27:13.714493+02 | auto  | "{\"hvac_modes\": [\"auto\", \"heat\", \"off\"], \"min_temp\": 4.5, \"max_temp\": 30.5, \"target_temp_step\": 0.5, \"preset_modes\": [\"boost\", \"comfort\", \"eco\"], \"current_temperature\": 22.5, \"temperature\": 21.5, \"preset_mode\": null, \"battery\": 2.6, \"rssi_peer\": 0, \"valve\": 25, \"mode\": \"Auto\", \"id\": \"XXXXXXX\", \"interface\": \"wireless\", \"friendly_name\": \"HZ B\\u00fcro\", \"supported_features\": 17}" | 
 2020-10-23 23:24:43.206075+02 | auto  | "{\"hvac_modes\": [\"auto\", \"heat\", \"off\"], \"min_temp\": 4.5, \"max_temp\": 30.5, \"target_temp_step\": 0.5, \"preset_modes\": [\"boost\", \"comfort\", \"eco\"], \"current_temperature\": 22.5, \"temperature\": 21.5, \"preset_mode\": null, \"battery\": 2.6, \"rssi_peer\": 0, \"valve\": 34, \"mode\": \"Auto\", \"id\": \"XXXXXXX\", \"interface\": \"wireless\", \"friendly_name\": \"HZ B\\u00fcro\", \"supported_features\": 17}" | 
 2020-10-23 23:14:24.705269+02 | auto  | "{\"hvac_modes\": [\"auto\", \"heat\", \"off\"], \"min_temp\": 4.5, \"max_temp\": 30.5, \"target_temp_step\": 0.5, \"preset_modes\": [\"boost\", \"comfort\", \"eco\"], \"current_temperature\": 22.4, \"temperature\": 21.5, \"preset_mode\": null, \"battery\": 2.6, \"rssi_peer\": 0, \"valve\": 34, \"mode\": \"Auto\", \"id\": \"XXXXXXX\", \"interface\": \"wireless\", \"friendly_name\": \"HZ B\\u00fcro\", \"supported_features\": 17}" | 
 2020-10-23 23:08:52.966288+02 | auto  | "{\"hvac_modes\": [\"auto\", \"heat\", \"off\"], \"min_temp\": 4.5, \"max_temp\": 30.5, \"target_temp_step\": 0.5, \"preset_modes\": [\"boost\", \"comfort\", \"eco\"], \"current_temperature\": 22.3, \"temperature\": 21.5, \"preset_mode\": null, \"battery\": 2.6, \"rssi_peer\": 0, \"valve\": 34, \"mode\": \"Auto\", \"id\": \"XXXXXXX\", \"interface\": \"wireless\", \"friendly_name\": \"HZ B\\u00fcro\", \"supported_features\": 17}" | 
 2020-10-23 22:53:21.929532+02 | auto  | "{\"hvac_modes\": [\"auto\", \"heat\", \"off\"], \"min_temp\": 4.5, \"max_temp\": 30.5, \"target_temp_step\": 0.5, \"preset_modes\": [\"boost\", \"comfort\", \"eco\"], \"current_temperature\": 22.2, \"temperature\": 21.5, \"preset_mode\": null, \"battery\": 2.6, \"rssi_peer\": 0, \"valve\": 34, \"mode\": \"Auto\", \"id\": \"XXXXXXX\", \"interface\": \"wireless\", \"friendly_name\": \"HZ B\\u00fcro\", \"supported_features\": 17}" |

Does anyone else have this observation?
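
A query-side workaround, assuming the column really holds a JSON-encoded string: cast to jsonb, unwrap the string with #>> '{}', then parse the inner JSON with a second cast:

SELECT time, state,
       ((attributes::jsonb #>> '{}')::jsonb -> 'current_temperature') AS ct
    FROM ltss
    WHERE entity_id = 'climate.hz_buro';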

Please add additional examples

It would be nice if you had some examples of using ltss stats in Grafana, and perhaps in Lovelace using a graphing card... it just might help illustrate the value. Love the project though!

DB might end up in an undefined state since there is no rollback on errors in the migration code

A more general approach would likely be to wrap the entire set of upgrade changes to the DB in a BEGIN/COMMIT block, with a ROLLBACK in case of errors. A robust way would be to set autocommit=False for all SQL, plus some code to handle the commit. Out of scope for this PR though.

That should (in theory) prevent the db from getting into an undefined state.

Originally posted by @antoonhuiskens in #112 (comment)
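
A minimal sketch of the idea in plain SQL; since PostgreSQL DDL is transactional, an error anywhere before COMMIT aborts the whole migration and leaves the schema untouched:

BEGIN;
-- all migration statements go here, for example:
ALTER TABLE ltss DROP CONSTRAINT ltss_pkey CASCADE;
ALTER TABLE ltss ADD PRIMARY KEY (time, entity_id);
COMMIT;  -- never reached if any statement above fails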

psycopg2-binary cannot be installed on Raspi

When attempting to install ltss into Home Assistant Container on a Raspberry Pi 4, I get the following error message:

2023-09-02 22:09:41.013 ERROR (SyncWorker_1) [homeassistant.util.package] Unable to install package psycopg2-binary>=2.8,<3.0: error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [23 lines of output]
      running egg_info
      creating /tmp/pip-pip-egg-info-24ok9pce/psycopg2_binary.egg-info
      writing /tmp/pip-pip-egg-info-24ok9pce/psycopg2_binary.egg-info/PKG-INFO
      writing dependency_links to /tmp/pip-pip-egg-info-24ok9pce/psycopg2_binary.egg-info/dependency_links.txt
      writing top-level names to /tmp/pip-pip-egg-info-24ok9pce/psycopg2_binary.egg-info/top_level.txt
      writing manifest file '/tmp/pip-pip-egg-info-24ok9pce/psycopg2_binary.egg-info/SOURCES.txt'
      
      Error: pg_config executable not found.
      
      pg_config is required to build psycopg2 from source.  Please add the directory
      containing pg_config to the $PATH or specify the full executable path with the
      option:
      
          python setup.py build_ext --pg-config /path/to/pg_config build ...
      
      or with the pg_config option in 'setup.cfg'.
      
      If you prefer to avoid building psycopg2 from source, please install the PyPI
      'psycopg2-binary' package instead.
      
      For further information please check the 'doc/src/install.rst' file (also at
      <https://www.psycopg.org/docs/install.html>).
      
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Obviously, ltss won't work afterwards.

Feature: write all states, changed or not, regularly

First off: thanks for making this, I much prefer timescale to influx.

Currently, querying for rarely changing states in Grafana can be very painful, requiring giant queries such as: https://community.home-assistant.io/t/migrating-from-influxdb-to-timescaledb-on-a-raspberry-pi/235304/3

For setups where database size isn't really limited, it would be great to have an option to write all states every x minutes to a separate table. This table could exclude things like the JSONB column to save on a bit of data.

This would make Grafana visualizations much easier.
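
As a query-side alternative (not a replacement for the requested feature), TimescaleDB can fill the gaps at read time with last-observation-carried-forward; the entity id and bucket width below are just examples, and the cast assumes numeric states:

SELECT time_bucket_gapfill('5 minutes', time, now() - INTERVAL '1 day', now()) AS bucket,
       locf(last(state::float, time)) AS state
    FROM ltss
    WHERE entity_id = 'sensor.outdoor_temperature'
      AND time > now() - INTERVAL '1 day'
    GROUP BY bucket;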

Replace recorder component

Hey, really like the idea of using Postgres for storing the sensor data!

This component is not to be considered as a replacement to the recorder component in Home Assistant but rather as an alternative to the InfluxDB component for more space-efficient long time storage of specific sensor states.

Is there a chance to extend this component to replace the recorder component completely in the future? I don't want two databases running just to store the same data, and I'm sure Postgres is much faster with a lot of data compared to SQLite or MySQL.

Event times are being recorded as UTC, regardless of my actual timezone

Hi! I'm using Home Assistant on a Raspberry Pi, running it in Docker via Docker Compose.

I just noticed that timestamps that are being recorded by ltss in the TimescaleDB database are in UTC without the timezone indication.

The timezone on the host and in all the containers, including Home Assistant and TimescaleDB, is right (+4 in my case), and the timezone returned by both the command line and Python within the Home Assistant container is also right. The time within Home Assistant, in settings, logbook, and clock, is right as well.

Here are the TimescaleDB ltss table contents though (sensor.time is being recorded so you can clearly see the difference):

homeassistant=# select * from ltss order by time desc limit 10;
 id  |             time              |    entity_id     |       state       |                           attributes                           
-----+-------------------------------+------------------+-------------------+----------------------------------------------------------------
 107 | 2022-08-11 21:19:00.004729+00 | sensor.date_time | 2022-08-12, 01:19 | {"icon": "mdi:calendar-clock", "friendly_name": "Date & Time"}
 106 | 2022-08-11 21:19:00.002517+00 | sensor.time      | 01:19             | {"icon": "mdi:clock", "friendly_name": "Time"}
 105 | 2022-08-11 21:18:00.004057+00 | sensor.time      | 01:18             | {"icon": "mdi:clock", "friendly_name": "Time"}
 104 | 2022-08-11 21:18:00.002282+00 | sensor.date_time | 2022-08-12, 01:18 | {"icon": "mdi:calendar-clock", "friendly_name": "Date & Time"}
 103 | 2022-08-11 21:17:00.00259+00  | sensor.time      | 01:17             | {"icon": "mdi:clock", "friendly_name": "Time"}
 102 | 2022-08-11 21:17:00.001513+00 | sensor.date_time | 2022-08-12, 01:17 | {"icon": "mdi:calendar-clock", "friendly_name": "Date & Time"}
 101 | 2022-08-11 21:16:00.00503+00  | sensor.time      | 01:16             | {"icon": "mdi:clock", "friendly_name": "Time"}
 100 | 2022-08-11 21:16:00.002935+00 | sensor.date_time | 2022-08-12, 01:16 | {"icon": "mdi:calendar-clock", "friendly_name": "Date & Time"}
  99 | 2022-08-11 21:15:00.002753+00 | sensor.date_time | 2022-08-12, 01:15 | {"icon": "mdi:calendar-clock", "friendly_name": "Date & Time"}
  98 | 2022-08-11 21:15:00.001726+00 | sensor.time      | 01:15             | {"icon": "mdi:clock", "friendly_name": "Time"}
(10 rows)

As far as I understood by reading the ltss code, it just puts event.time_fired right into the database, but after looking at home-assistant_v2.db I noticed that time_fired is also recorded in UTC with no timezone indication within Home Assistant itself.

sqlite> select * from events order by time_fired desc limit 10;
4791|homeassistant_started|||0|2022-08-11 20:57:08.330869|01GA7B4NFAC4RB0SYCE2SHXB6H|||
4790|core_config_updated|||0|2022-08-11 20:57:08.330072|01GA7B4NFATEME28A87W1G584E|||
4789|homeassistant_start|||0|2022-08-11 20:57:08.300621|01GA7B4NECNH4KA8E7AFBN4YRT|||
4788|core_config_updated|||0|2022-08-11 20:57:08.300143|01GA7B4NECTS6BZ5FSHSMVZKJR|||
4787|component_loaded|||0|2022-08-11 20:57:08.279795|01GA7B4NDQMVTN9MF56AK527M4|||113
4786|panels_updated|||0|2022-08-11 20:57:08.265364|01GA7B4ND90QXQRBX8GBDRG765|||
4785|component_loaded|||0|2022-08-11 20:57:07.886565|01GA7B4N1EN0K65T3112JD2YA8|||97
4784|component_loaded|||0|2022-08-11 20:57:07.882754|01GA7B4N1AY3Q4GBAESXS9CW8G|||96
4783|component_loaded|||0|2022-08-11 20:57:07.855351|01GA7B4N0F6GZGPEH97T3VG3SY|||108
4782|component_loaded|||0|2022-08-11 20:57:07.814306|01GA7B4MZ68P395WNZJ8S01E0R|||104

So I'm not sure whether HA gives the time_fired to ltss in this format or not; the HA code is a bit too hard for me to read.

To be honest, I have no idea whether this is expected behaviour. It's probably not a bug in ltss, since ltss does nothing with the timestamp it gets, but I'm trying to figure out whether it's expected behaviour for HA to hand UTC time to ltss (which it probably does, I guess?), and whether I should open an issue with HA or whether this should/could be mitigated on the ltss side.
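
For what it's worth, timestamptz values are stored internally as UTC by design, so the usual approach is to convert at query time; the zone name below is just an example for +4:

SELECT time AT TIME ZONE 'Asia/Dubai' AS local_time, entity_id, state
    FROM ltss
    ORDER BY time DESC
    LIMIT 10;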

psycopg2 error in home assistant v.2023.3.4

Integration error: ltss - Requirements for ltss not found: ['psycopg2>=2.8,<3.0'].

It seems that this is a requirement that isn't actually required. Could that be?
The component seems to work normally without it.

Add init commands to execute as the postgres (admin) user to the docs

At the moment you write in the README:

(...) During the initial startup of the component, the extensions will be created on the specified database. This requires superuser priviligies on the PostgreSQL instance (...) Once the extensions are created, a user without superuser rights can be used!

Executing commands on startup as the DB superuser postgres and then switching to another user may lead to permission problems and is not really a good idea. For example, user A could create a table which user B can't access afterwards.

I think there are only two commands to execute manually before editing the HA configuration.yaml file:

CREATE EXTENSION IF NOT EXISTS postgis CASCADE;
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

I would include these statements in the README, let the user execute them by default, and only check for their existence inside your startup scripts.

Add option to skip attributes?

To reduce the size of the database I would like to skip the attributes of each entity because I'm only interested in the state. Is this possible, or could this be a configuration option?

[FR] Please add support for HACS

And as a Feature Request: Could you maybe also please add HACS support to ltss?
I just think it would help ltss become known to a wider audience and make it easier to track features and fixes.
Seems you’ll have to restructure your repo a bit for that:
https://github.com/hacs/default

Thanks in advance

ltss won't load after update to ha-core 2023.3.2

I'm running supervised on KVM, and TimescaleDB in a container outside of that VM, all of which has been running well for months.

I just updated from ha core 2023.2.something to 2023.3.2, using the web-ui.

I got errors at that point about not being able to satisfy the requirement for sqlalchemy >=1 and <2, so I then updated ltss from v1.x.x to the current release 2.0.1.

After rebooting the full VM, I get a notification that the ltss config is invalid:

And the logs show this error:

Logger: homeassistant.setup
Source: setup.py:213
First occurred: 23:19:59 (1 occurrences)
Last logged: 23:19:59

Setup failed for custom integration ltss: Unable to import component: cannot import name 'with_metaclass' from 'sqlalchemy.util' (/usr/local/lib/python3.10/site-packages/sqlalchemy/util/__init__.py)

I tried (through the hacs gui) deleting and then downloading the ltss package again, but with the same result.

The docker container that supervisor is running for homeassistant is 74baa42900dd ghcr.io/home-assistant/qemux86-64-homeassistant:2023.3.2, and running python --version in it reports Python 3.10.10.

If I also run pip show sqlalchemy in that same container, I get:

# pip show sqlalchemy
Name: SQLAlchemy
Version: 2.0.5.post1
Summary: Database Abstraction Library
Home-page: https://www.sqlalchemy.org
Author: Mike Bayer
Author-email: [email protected]
License: MIT
Location: /usr/local/lib/python3.10/site-packages
Requires: greenlet, typing-extensions
Required-by: GeoAlchemy2, pygtfs

My configuration.yaml contains:

# AJG 2022-07-04 Now using PostgreSQL and TimescaleDB for history etc
recorder:                                                            
  db_url: !secret postgres_url                                       
ltss:                                                                
  db_url: !secret postgres_url                                       
  chunk_time_interval: 2592000000000                                 

(the chunk_time_interval was added today after the update to 2.0.1 and reading the new readme).

I couldn't see any issues searching the sqlalchemy tracker for with_metaclass, so I'm not really sure where to go from here. Any pointers to help work out what's happening?

More reliable DB reconnection logic

When the TimescaleDB is unavailable while ltss is running / being started, it's retrying to reconnect 10 times with 3s delay, as seen here: https://github.com/freol35241/ltss/blob/master/custom_components/ltss/__init__.py#L125

After 30s it gives up and won't start writing data even if the DB becomes healthy in the meantime. This means I have to babysit LTSS when I'm doing DB maintenance, and if I don't notice that it's down, I'm left with big gaps in the data.

It would be better if LTSS was retrying to reconnect indefinitely (while logging the problem if it cannot connect), or at least if the number of retries was configurable (all the way to infinity).

I can help with the PR, just let me know if you are interested in this kind of change.

Thank you for your hard work.

Having trouble finding my data from Grafana and pgAdmin

Hey, first of all thanks for this integration, it's super awesome for storing lots of data :)

I have used LTSS for a good part of a year, and now I finally want to see the logged data, but I have trouble finding it.
If I do a

SELECT
  entity_id
FROM
  ltss
LIMIT
  500

Then I only get some automation.xxx entries, and I assume that's where every single sensor and so on should have been?
The database is around 8 GB, so there should be something there :P
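
A query that lists which entities actually have rows, instead of sampling the first 500, might help narrow this down:

SELECT entity_id, count(*) AS row_count
    FROM ltss
    GROUP BY entity_id
    ORDER BY row_count DESC;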

Outdated geoalchemy2 version?

Hello

I'm quite new to Home Assistant and have been using InfluxDB until now for my long-term storage. But I dislike the "new" version of Influx 2.x with the Flux part. Also, it would be great to just have one database running, which can store the normal Home Assistant data in Postgres and the long-term sensor data in Timescale.

Now onto my issue :-)
If I try to use the LTSS extension, I get the following error: Setup failed for custom integration ltss: Requirements for ltss not found: ['geoalchemy2>=0.8,<0.9'].

I checked the version pinned in the Home Assistant repository and it's sqlalchemy==1.4.45.

I never used sqlalchemy, but are there huge breaking changes between the two versions? Do you think we/you could just upgrade to >=1.4.45?

I also have no idea how to develop an integration for Home Assistant. But it seems it's not possible to have a separate "venv" for each integration?

edit: I just dug a bit deeper, and it seems that requirements should be installed by default. The more detailed logs show that the C compiler gcc is not installed in the Docker container which is running Home Assistant, therefore pip fails to install sqlalchemy! I will try to fix this tomorrow and close this issue if it is solved.

Reduce LTSS database size

I am not an expert with LTSS, and looking through the PRs and issues it looks like there have been many attempts, but I am wondering if there is an easy and practical solution to reduce the size of the LTSS database. My database is 17 GB and this doesn't feel very efficient. I did experiment in the past with other time-series databases, and out of the box their storage was more optimized.

Any recommendations to compact the size of the ltss table?
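
One lever worth checking is TimescaleDB's native compression; a hedged sketch (newer LTSS versions may already enable this, so check timescaledb_information.compression_settings first):

-- Enable compression, segmented by entity for better ratios,
-- and compress chunks older than 14 days automatically:
ALTER TABLE ltss SET (timescaledb.compress, timescaledb.compress_segmentby = 'entity_id');
SELECT add_compression_policy('ltss', INTERVAL '14 days');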
