
Uptime Kuma entities not available in HA #4502

Closed
1 task done
xelemorf opened this issue Feb 18, 2024 · 3 comments
Labels
area:core, help

Comments

@xelemorf

📑 I have found these related issues/pull requests

Related to hassio-addons/addon-uptime-kuma#156

🛡️ Security Policy

Description

Uptime Kuma sensor entities disappeared in HA, showing "Error in describing condition: can't access property "entity_id" of undefined"; a manual check for entity availability shows that none of the custom Uptime Kuma monitors are populated into HA.

👟 Reproduction steps

Let Uptime Kuma run for over 90 days with 28 monitor entities and wait for the database to grow past 2 GB (2397.6 MB), while the settings are configured to keep only 1 day of history.

👀 Expected behavior

Custom Uptime Kuma monitors should be auto-populated after the add-on starts.

😓 Actual Behavior

Uptime Kuma sensor entities disappeared in HA. Only the entities below are available; none of the custom monitors are:

  • Uptime Kuma Running
  • uptimekuma ha local

🐻 Uptime-Kuma Version

0.12.0 (HA Addon)

💻 Operating System and Arch

HAOS 11.5 (OVA)

🌐 Browser

Firefox Nightly

🖥️ Deployment Environment

  • Runtime: HAOS
  • Database: sqlite/embedded
  • Filesystem used to store the database on: HAOS
  • number of monitors: 28

📝 Relevant log output

--- 192.168.69.29 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2081ms
pipe 3
 | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:22:25+01:00 [MANAGE] INFO: Clear Statistics User ID: 1
2024-02-18T13:24:20+01:00 [MONITOR] WARN: Monitor #4 'livingroom-ac': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:24:22+01:00 [MONITOR] WARN: Monitor #9 'mobile-rencsi-a71': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:22+01:00 [MONITOR] WARN: Monitor #5 'livingroom-cam1': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:24+01:00 [MONITOR] WARN: Monitor #6 'livingroom-cam2': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:24+01:00 [MONITOR] WARN: Monitor #7 'livingroom-tv-wifi': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:24:24+01:00 [MONITOR] WARN: Monitor #8 'lobby-cam': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:26+01:00 [MONITOR] WARN: Monitor #24 'livingroom-soundbar': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:24:28+01:00 [MONITOR] WARN: Monitor #17 'Optiplex - Plex IP4': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2024-02-18T13:24:28+01:00 [MONITOR] WARN: Monitor #27 'rpi-printer': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:24:29+01:00 [MONITOR] WARN: Monitor #11 'Optiplex IP4': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:29+01:00 [MONITOR] WARN: Monitor #18 'Optiplex - Plex IP5': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: http
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:572:22)
    at async RedBeanNode.getRow (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:558:22)
    at async RedBeanNode.getCell (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:593:19)
    at async Settings.get (/opt/uptime-kuma/server/settings.js:54:21)
    at async UptimeKumaServer.getClientIPwithProxy (/opt/uptime-kuma/server/uptime-kuma-server.js:313:13)
    at async Object.allowRequest (/opt/uptime-kuma/server/uptime-kuma-server.js:122:34) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
  bindings: [ 'trustProxy', 1 ]
}
    at process.unexpectedErrorHandler (/opt/uptime-kuma/server/server.js:1899:13)
    at process.emit (node:events:517:28)
    at emit (node:internal/process/promises:149:20)
    at processPromiseRejections (node:internal/process/promises:283:27)
    at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
2024-02-18T13:24:30+01:00 [MONITOR] WARN: Monitor #23 'WAN DNS': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: dns
2024-02-18T13:24:30+01:00 [MONITOR] WARN: Monitor #30 'xelmsrv': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:24:30+01:00 [MONITOR] WARN: Monitor #22 'Pi-Hole WebUI': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: http
2024-02-18T13:24:31+01:00 [MONITOR] WARN: Monitor #13 'Router ping': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:32+01:00 [MONITOR] WARN: Monitor #10 'mobile-xelmobile-a71': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:33+01:00 [MONITOR] WARN: Monitor #14 'rpi-weather-wifi': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:34+01:00 [MONITOR] WARN: Monitor #15 'WAN PING': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:34+01:00 [MONITOR] WARN: Monitor #32 'xelpc-eth': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:24:35+01:00 [MONITOR] WARN: Monitor #19 'Optiplex - qBittorrent': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: http
2024-02-18T13:24:35+01:00 [MONITOR] WARN: Monitor #26 'Pi-Hole VM': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:36+01:00 [MONITOR] WARN: Monitor #16 'Optiplex - Everything web': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: http
2024-02-18T13:24:36+01:00 [MONITOR] WARN: Monitor #29 'bedroom-tvbox-eth': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:37+01:00 [MONITOR] WARN: Monitor #31 'Optiplex IP5': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:37+01:00 [MONITOR] WARN: Monitor #36 'pihole-vmw': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:39+01:00 [MONITOR] WARN: Monitor #35 'rpi-nups': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
2024-02-18T13:24:40+01:00 [MONITOR] WARN: Monitor #1 'andi-android': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
2024-02-18T13:24:40+01:00 [MONITOR] WARN: Monitor #3 'corridor-ac': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 20 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:572:22)
    at async RedBeanNode.getRow (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:558:22)
    at async RedBeanNode.getCell (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:593:19)
    at async Settings.get (/opt/uptime-kuma/server/settings.js:54:21)
    at async UptimeKumaServer.getClientIPwithProxy (/opt/uptime-kuma/server/uptime-kuma-server.js:313:13)
    at async Object.allowRequest (/opt/uptime-kuma/server/uptime-kuma-server.js:122:34) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
  bindings: [ 'trustProxy', 1 ]
}
    at process.unexpectedErrorHandler (/opt/uptime-kuma/server/server.js:1899:13)
    at process.emit (node:events:517:28)
    at emit (node:internal/process/promises:149:20)
    at processPromiseRejections (node:internal/process/promises:283:27)
    at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:312:26)
    at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:287:28)
    at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:572:22)
    at async RedBeanNode.getRow (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:558:22)
    at async RedBeanNode.getCell (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:593:19)
    at async Settings.get (/opt/uptime-kuma/server/settings.js:54:21)
    at async UptimeKumaServer.getClientIPwithProxy (/opt/uptime-kuma/server/uptime-kuma-server.js:313:13)
    at async Object.allowRequest (/opt/uptime-kuma/server/uptime-kuma-server.js:122:34) {
  sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
  bindings: [ 'trustProxy', 1 ]
}
    at process.unexpectedErrorHandler (/opt/uptime-kuma/server/server.js:1899:13)
    at process.emit (node:events:517:28)
    at emit (node:internal/process/promises:149:20)
    at processPromiseRejections (node:internal/process/promises:283:27)
    at process.processTicksAndRejections (node:internal/process/task_queues:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
@xelemorf added the "bug" label on Feb 18, 2024
@CommanderStorm added the "help" and "area:core" labels and removed the "bug" label on Feb 18, 2024
@CommanderStorm (Collaborator)

A lot of performance improvements have been made in v2.0 (our next release): aggregated instead of non-aggregated tables for storing heartbeats, letting users choose MariaDB as a database backend, and pagination of important events. These resolve™️ this problem area.
=> I'm going to close this issue

You can subscribe to our releases and get notified when a new release (such as v2.0-beta.0) is made.
See #4500 for the bugs that need addressing before that can happen.

Meanwhile (the issue is with SQLite not reading data fast enough to keep up):

  • limit how much retention you have configured (a quick way to check the current heartbeat volume is sketched after this list)
  • limit yourself to a reasonable number of monitors (hardware-dependent, no good measure)
  • don't run on slow disks or disks with high latency, such as HDDs, SD cards, or a USB stick attached to a router
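
For reference, a minimal sketch for gauging how large the heartbeat table has grown, assuming the embedded SQLite database file kuma.db (the path inside the add-on container is an assumption; adjust to your install):

$ ls -lh kuma.db                                      # on-disk size of the database
$ sqlite3 kuma.db "SELECT COUNT(*) FROM heartbeat;"   # number of stored heartbeat rows

The heartbeat table is where per-check results accumulate: at the 20-second interval shown in the logs, 1 day of retention for 28 monitors should be roughly 28 × (86400 / 20) ≈ 121,000 rows, so a count far beyond that suggests retention cleanup is not keeping up.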

@chakflying (Collaborator)

  • Please provide the Uptime Kuma version instead of the HA add-on version. If you are unable to access the UI, restart the Uptime Kuma server and the version number will be printed in the logs (one way to do this is sketched after this list).

  • What hardware and storage media are you using to run Home Assistant, which I assume you are also running Uptime Kuma on?
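
A sketch using the HAOS ha CLI; the add-on slug a0d7b954_uptime-kuma is an assumption and may differ on your install:

$ ha addons restart a0d7b954_uptime-kuma   # restart the add-on
$ ha addons logs a0d7b954_uptime-kuma      # the Uptime Kuma version is printed near the top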

@CommanderStorm (Collaborator)

@xelemorf
If you have some healthchecks enabled, they might be restarting Uptime Kuma before the database can be vacuumed. You can use these commands to delete ALL heartbeats.

$ sqlite3 kuma.db 
sqlite> delete from heartbeat;
sqlite> vacuum; 
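
Note that VACUUM rewrites the whole database into a temporary copy, so it needs free disk space on the order of the database size. Stopping the add-on and keeping a backup first would be prudent (a sketch; the file path is an assumption):

$ cp kuma.db kuma.db.bak   # keep a copy before the destructive delete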
