This node listens to all KNX telegrams from the selected KNX Ultimate gateway, builds traffic statistics, detects anomalies, and can optionally query an LLM.
Outputs
- Summary/Stats (`msg.payload` JSON)
- Anomalies (`msg.payload` JSON)
- AI Assistant (`msg.payload` text, with `msg.summary`)
Commands (input)
Send `msg.topic`:
- `summary` (or empty): emit summary immediately
- `reset`: clear internal history/counters
- `ask`: send a question to the configured LLM
For `ask`, provide the question in `msg.prompt` (preferred) or `msg.payload` (string).
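In a Node-RED function node, the commands above can be built like this (a minimal sketch; only `msg.topic`, `msg.prompt`, and `msg.payload` are documented here, the helper names are illustrative):

```javascript
// Build command messages for the KNX AI node input.
// "summary" and "reset" need only msg.topic; "ask" also carries the question.
function makeSummaryMsg() {
  return { topic: "summary" }; // emit summary immediately
}

function makeResetMsg() {
  return { topic: "reset" }; // clear internal history/counters
}

function makeAskMsg(question) {
  // msg.prompt is preferred; msg.payload (string) also works.
  return { topic: "ask", prompt: question };
}

const ask = makeAskMsg("Which group address was most active today?");
console.log(ask.topic + ": " + ask.prompt);
```

Wire the function node's output to the KNX AI node's input, then trigger it with an inject node.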
Configuration fields
All fields exposed in the KNX AI editor are listed below.
General
- Gateway: KNX Ultimate gateway/config node used as telegram source.
- Name: Node label and dashboard header name.
- Topic: Base topic used in node outputs.
- Open KNX AI Web button: Opens the full KNX AI web dashboard (`/knxUltimateAI/sidebar/page`).
Capture
- Capture GroupValue_Write: Capture write telegrams.
- Capture GroupValue_Response: Capture response telegrams.
- Capture GroupValue_Read: Capture read telegrams.
Analysis
- Analysis window (seconds): Main analysis window used for summaries/rates.
- History window (seconds): Retention window for internal telegram history.
- Also archive captured telegrams to disk: Stores captured telegrams in `knxultimatestorage/knxai/history/<node-id>/YYYY-MM-DD.jsonl` in addition to RAM.
- Disk archive retention (days): Number of days kept on disk before old archive files are deleted automatically.
- Max stored events: Maximum number of telegrams kept in memory.
- Telegrams with `echoed: true` are internal passthrough copies (from the node input pin to its own output pin), not real KNX bus traffic; exclude them from bus statistics/anomaly analysis.
- Auto emit summary (seconds, 0=off): Periodic summary output interval.
- Top list size: Number of top group addresses/sources in summary.
- Detect simple patterns (A -> B): Enable transition/pattern detection.
- Pattern max lag (ms): Max time gap for pattern transition matching.
- Pattern min occurrences: Minimum occurrences before a pattern is reported.
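The pattern settings above can be pictured with a small sketch (illustrative only, not the node's actual implementation): count A -> B transitions whose time gap stays within the max lag, and report a pair once it reaches the minimum occurrences.

```javascript
// Sketch of A -> B transition detection (illustrative only).
// events: [{ ga: "1/1/1", ts: 1000 }, ...] ordered by timestamp (ms).
function findPatterns(events, maxLagMs, minOccurrences) {
  const counts = new Map();
  for (let i = 1; i < events.length; i++) {
    const prev = events[i - 1];
    const cur = events[i];
    // Only count B following A within the allowed lag.
    if (cur.ts - prev.ts <= maxLagMs && prev.ga !== cur.ga) {
      const key = prev.ga + " -> " + cur.ga;
      counts.set(key, (counts.get(key) || 0) + 1);
    }
  }
  // Report only pairs seen often enough.
  return [...counts.entries()]
    .filter(([, n]) => n >= minOccurrences)
    .map(([pattern, occurrences]) => ({ pattern, occurrences }));
}

const events = [
  { ga: "1/1/1", ts: 0 },    { ga: "1/1/2", ts: 200 },
  { ga: "1/1/1", ts: 5000 }, { ga: "1/1/2", ts: 5300 },
  { ga: "1/1/1", ts: 9000 }, { ga: "1/1/2", ts: 20000 }, // gap exceeds max lag
];
console.log(findPatterns(events, 1000, 2));
// -> [ { pattern: "1/1/1 -> 1/1/2", occurrences: 2 } ]
```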
Anomalies
- Rate window (seconds): Sliding time window for anomaly rate checks.
- Max overall telegrams/sec (0=off): Overall bus rate threshold.
- Max telegrams/sec per GA (0=off): Per-group-address rate threshold.
- Flap window (seconds): Time window for flapping/change-rate detection.
- Max changes per GA in window (0=off): Max allowed changes in flap window.
AI Assistant
- Enable LLM assistant: Enable Ask/chat assistant features.
- Provider: Select LLM backend (OpenAI-compatible or Ollama).
- Endpoint URL: Chat/completions endpoint URL.
- API key: API key (not required for local Ollama).
- Model: Model ID/name.
- System prompt: Global instruction for KNX analysis behavior (Advanced).
- If the disk archive is enabled, Ask uses it by default: explicit dates/ranges are honored; otherwise the assistant searches the last 24 hours plus the current RAM events.
- Include raw payload hex: Include raw telegram hex in prompt.
- Include Node-RED project inventory: Include the whole Node-RED project inventory in the prompt, including KNX nodes and other useful nodes such as function/change/inject/template when they contain KNX-related logic or group addresses.
- Include documentation snippets (help/README/examples): Include docs context.
- Docs language: Preferred language for docs snippets.
- Refresh button: Query provider and load available model IDs.
Advanced
The Advanced section groups the tuning fields already described above: Analysis window, Max stored events, Top list size, Pattern max lag, Pattern min occurrences, Rate window, the telegrams/sec thresholds, Flap window, and Max changes per GA in window.
Ollama quick setup (local)
- Choose Provider = Ollama.
- Default endpoint: `http://localhost:11434/api/chat`.
- If no local models are found, use:
  - 1) Download model: opens the Model library page.
  - 2) Install it: downloads and installs the model locally (for example `llama3.1`).
- During model refresh/install, KNX AI also tries to auto-start the Ollama server when possible.
- If install fails with connection errors, ensure Ollama is running (desktop app or `ollama serve`).
- If Node-RED runs in Docker, use `host.docker.internal` instead of `localhost` in the endpoint URL.
Security note
If LLM is enabled, KNX traffic context can be sent to the configured endpoint. Use local providers if you need strict on-prem data handling.