logic: nest peak shaving config in calculation parameters (#357)
Conversation
Move PeakShavingConfig from core.py into logic_interface.py and embed it as a nested field on CalculationParameters, replacing the four flat peak_shaving_* fields. The evcc enabled-override in core.py now uses dataclasses.replace, so per-cycle parameter rebuilds no longer rely on the old flat-field copy pattern. The combined+price_limit=None fallback warning moves from __post_init__ to from_config so dataclasses.replace in the per-evaluation build path no longer re-emits it on every cycle. https://claude.ai/code/session_014dHHuZaGbZckiAZfYJNmfi
All four peak shaving parameters are now adjustable at runtime over MQTT. The price_limit topic uses -1 as the documented off-value, matching the algorithm semantics (no slot price <= -1 ever exists) and keeping the Home Assistant number entity round-trip safe. The mode topic is exposed as an HA select entity using the existing discovery helper. Setters validate via dataclasses.replace, so an invalid payload leaves the previous configuration in place. https://claude.ai/code/session_014dHHuZaGbZckiAZfYJNmfi
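The nested-config rebuild described above can be sketched as follows. This is a minimal illustration, not the batcontrol source: the field names and defaults are assumptions drawn from the PR description, and the real `CalculationParameters` has many more fields.

```python
from dataclasses import dataclass, field, replace
from typing import Optional

@dataclass
class PeakShavingConfig:
    # Field names follow the PR description; defaults are illustrative.
    enabled: bool = False
    mode: str = "combined"
    allow_full_battery_after: int = 18
    price_limit: Optional[float] = None

@dataclass
class CalculationParameters:
    max_capacity: float = 10_000.0  # illustrative placeholder field
    peak_shaving: PeakShavingConfig = field(default_factory=PeakShavingConfig)

# Per-cycle evcc enabled-override: rebuild the parameters without mutating
# the base config, replacing only the nested peak_shaving.enabled flag.
base = CalculationParameters(peak_shaving=PeakShavingConfig(enabled=True))
cycle_params = replace(
    base, peak_shaving=replace(base.peak_shaving, enabled=False))
```

Because `dataclasses.replace` constructs a new instance each cycle, any warning emitted from `__post_init__` would fire on every rebuild, which is why the PR moves the fallback warning into `from_config`.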
Pull request overview
This PR refactors peak shaving configuration by moving PeakShavingConfig into the logic layer and nesting it under CalculationParameters, while also extending MQTT/HA runtime controls to cover peak shaving mode and price_limit (including a None -> -1 roundtrip mapping).
Changes:
- Move `PeakShavingConfig` to `batcontrol.logic.logic_interface` and replace flat `CalculationParameters.peak_shaving_*` fields with `CalculationParameters.peak_shaving`.
- Update `core.py` to build per-cycle calc parameters via `dataclasses.replace(...)` (evcc override path) and add API/MQTT setters for peak shaving `mode` and `price_limit`.
- Extend MQTT publishing + HA discovery and update tests/scripts/docs accordingly.
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| tests/batcontrol/test_peak_shaving_config.py | Updates imports and shifts fallback-warning assertions to from_config logger/path. |
| tests/batcontrol/test_mqtt_api.py | Adds tests for Batcontrol.api_set_peak_shaving_price_limit / api_set_peak_shaving_mode. |
| tests/batcontrol/test_core.py | Updates EVCC-related calc-parameter construction to use nested peak_shaving. |
| tests/batcontrol/logic/test_peak_shaving.py | Migrates peak shaving tests to nested config usage; removes old flat-field tests. |
| tests/batcontrol/logic/test_min_grid_charge_soc_live_cases.py | Drops obsolete peak_shaving_enabled arg now that peak shaving is nested with defaults. |
| src/batcontrol/mqtt_api.py | Adds publish + HA discovery entries for peak shaving price_limit and mode. |
| src/batcontrol/logic/next.py | Switches peak shaving accessors to calculation_parameters.peak_shaving.*. |
| src/batcontrol/logic/logic_interface.py | Introduces new PeakShavingConfig and nests it in CalculationParameters. |
| src/batcontrol/logic/__init__.py | Re-exports PeakShavingConfig from the logic package. |
| src/batcontrol/core.py | Removes old PeakShavingConfig, uses nested config + dataclasses.replace, adds setters + MQTT callbacks. |
| scripts/simulate_peak_shaving_price.py | Updates simulation to pass PeakShavingConfig via CalculationParameters.peak_shaving. |
| scripts/simulate_peak_shaving_day.py | Same as above for day simulation. |
| docs/WIKI_peak_shaving.md | Documents new MQTT topics/entities and clarifies runtime changes are not persisted. |
```python
class TestPeakShavingPriceBased(unittest.TestCase):
    """Tests for _calculate_peak_shaving_charge_limit_price_based."""
```
**Copilot:**
The old CalculationParameters peak-shaving field tests were removed, but there are no equivalent assertions for the new nested CalculationParameters.peak_shaving (defaults and basic validation/wiring). Add a small test that constructs CalculationParameters with no explicit peak_shaving and verifies peak_shaving.enabled/mode/allow_full_battery_after/price_limit defaults, plus one case that passing a PeakShavingConfig with invalid values raises as expected.
**Reply:**
Deliberately not adopted. The deleted tests in TestCalculationParametersPeakShaving exercised the validation in CalculationParameters.__post_init__ — with this refactor that validation moved to PeakShavingConfig.__post_init__ and is covered equivalently by tests/batcontrol/test_peak_shaving_config.py (12 validation tests, including the mode/allow_full_battery_after/price_limit ranges, bool rejection, and the from_config path).
The default values of the nested field are trivial dataclass defaults from PeakShavingConfig — they are already exercised indirectly by every CalculationParameters(...) call in the test suite. An explicit "verify defaults" test would be pure symmetry migration.
The fix for the real bug (bool input via the setter) has been pushed as a separate commit.
Generated by Claude Code
```python
    # single calculation cycle via ``dataclasses.replace`` in core.py.
    peak_shaving: PeakShavingConfig = field(default_factory=PeakShavingConfig)

    def __post_init__(self):
```
**Copilot:**
CalculationParameters now accepts a nested peak_shaving object, but __post_init__ does not validate that peak_shaving is actually a PeakShavingConfig instance. If a caller accidentally passes a dict/None, the failure will occur later as an AttributeError in logic code. Consider adding an isinstance check in CalculationParameters.__post_init__ (and raising a ValueError with a clear message) to keep configuration/parameter errors fail-fast and user-friendly.
Suggested change:

```python
def __post_init__(self):
    if not isinstance(self.peak_shaving, PeakShavingConfig):
        raise ValueError(
            "peak_shaving must be a PeakShavingConfig instance, "
            f"got {type(self.peak_shaving).__name__}"
        )
```
**Reply:**
Deliberately not adopted. CalculationParameters.__post_init__ currently performs no type check for any of the other fields (max_capacity, min_grid_charge_soc, min_price_difference, ...) — an isolated isinstance check for peak_shaving alone would break the existing pattern without making the class fail-fast as a whole.
For an internal miscall (all consumers live in this repo: core.py, next.py, default.py, the simulation scripts), the first attribute access raises an AttributeError with a clear stack trace pointing at the exact line — sufficient for an internal-only API. An explicit type check would be defensive programming against a caller that does not exist.
```python
try:
    new_config = dataclasses.replace(
        self.peak_shaving_config, price_limit=float(price_limit))
except (TypeError, ValueError) as exc:
```
**Copilot:**
api_set_peak_shaving_price_limit currently does float(price_limit) before validating. This means bool inputs (True/False) get silently converted to 1.0/0.0, bypassing PeakShavingConfig's explicit bool rejection for price_limit. Add an explicit bool check (or pass the raw value into dataclasses.replace and let PeakShavingConfig.__post_init__ reject bools) so invalid types don't get coerced into valid numeric limits.
**Reply:**
Fixed in 37703e8: the setter now rejects isinstance(price_limit, bool) before the float(...) cast, plus a test (test_bool_inputs_are_rejected) that checks both True and False. True/False can therefore no longer turn into price_limit=1.0/0.0 via the MQTT API path — the bool rejection in PeakShavingConfig.__post_init__ now applies here as well.
float(True)/float(False) would silently coerce to 1.0/0.0 and bypass the explicit bool rejection in PeakShavingConfig.__post_init__, allowing a nonsensical "boolean price limit" to enter the runtime config. Reject bools in the setter so MQTT-driven updates stay consistent with the dataclass validation. https://claude.ai/code/session_014dHHuZaGbZckiAZfYJNmfi
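The coercion pitfall and the guard can be shown in a few lines. `validated_price_limit` is a hypothetical helper mirroring the setter's bool check as described in the reply above, not batcontrol's actual function:

```python
# In Python, bool is a subclass of int, so float() happily coerces it —
# exactly the pitfall the commit fixes: True/False would silently become
# price limits of 1.0/0.0.
assert float(True) == 1.0
assert float(False) == 0.0

def validated_price_limit(price_limit) -> float:
    """Hypothetical sketch of the setter's guard: reject bools before
    the float() cast so type errors are not coerced into valid limits."""
    if isinstance(price_limit, bool):
        raise ValueError("price_limit must be numeric, not bool")
    return float(price_limit)
```

Note that `isinstance(price_limit, bool)` must run before `float(...)`: checking afterwards is too late, because the coerced value is an ordinary float.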
```python
    Negative values are accepted and effectively disable the
    price-based component (no slot price <= -1 ever exists).
```
**Copilot:**
The docstring for api_set_peak_shaving_price_limit says that negative values “effectively disable the price-based component”, but the implementation (and docs/tests elsewhere) rely on the specific off-value -1.0. Other negative values (e.g. -0.1) can still match negative tariff slots and therefore do not disable the price component. Please update the wording to reflect that -1 (or <= -1) is the intended off-value, and that arbitrary negative values are merely accepted as numeric inputs.
Suggested change:

```python
    Negative values are accepted as numeric inputs. To disable the
    price-based component, use -1 or any value <= -1. Arbitrary
    negative values above -1 may still match negative tariff slots.
```
**Reply:**
Fixed in 2130db9. The docstring now states explicitly that only -1 (or <= -1) works as the off-value — values like -0.1 still match negative tariff slots and are not a disable switch.
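The off-value semantics can be illustrated with a small sketch. The prices are illustrative, and `cheap_slots` is a hypothetical stand-in for the price-based selection (a slot participates iff its price is at or below the limit, per the docstring discussed above):

```python
# Illustrative EUR/kWh slot prices; real tariffs can dip slightly below
# zero but in practice never reach -1 EUR/kWh.
slot_prices = [0.32, 0.05, -0.12, 0.19]

def cheap_slots(prices, price_limit):
    # Hypothetical selection rule: a slot counts as "cheap" when its
    # price is at or below the configured limit.
    return [p for p in prices if p <= price_limit]

# -1 acts as "off": no real slot price is <= -1, so nothing matches.
assert cheap_slots(slot_prices, -1.0) == []
# -0.1 is NOT "off": a negative tariff slot can still match it.
assert cheap_slots(slot_prices, -0.1) == [-0.12]
```

This is why only `price_limit <= -1` is a reliable disable value, while other negative limits remain ordinary thresholds.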
```python
def publish_peak_shaving_price_limit(self, price_limit) -> None:
    """ Publish peak shaving price limit to MQTT
```
**Copilot:**
publish_peak_shaving_price_limit is missing a type annotation for the price_limit argument, while the neighboring peak shaving publish methods are typed. Adding something like Optional[float] (or float | None) would make the API clearer and help static analysis/pylint across the codebase.
**Reply:**
Fixed in 2130db9. The price_limit parameter is now annotated as Optional[float] — matching the documented None -> -1.0 mapping in the body and aligning with the style of the neighboring publish_* methods.
- api_set_peak_shaving_price_limit docstring previously claimed all negative values disable the price-based component; only <= -1 does so, since prices below -1 do not occur in practice. Values like -0.1 still match negative tariff slots. Reword to make the off-value semantics explicit.
- publish_peak_shaving_price_limit was missing a parameter type while neighboring publish_* methods are typed. Annotate as Optional[float], matching the None -> -1.0 mapping documented in the body.

https://claude.ai/code/session_014dHHuZaGbZckiAZfYJNmfi
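The `Optional[float]` annotation and the None -> -1.0 round-trip can be sketched as follows; `price_limit_payload` is a hypothetical helper standing in for the payload computation inside the real publish method:

```python
from typing import Optional

def price_limit_payload(price_limit: Optional[float]) -> float:
    """Map the internal price limit to the MQTT payload value.

    None (price component off) is published as the documented off-value
    -1.0, so the Home Assistant number entity round-trips cleanly:
    setting -1 via the entity maps back to "off" on the setter side.
    """
    return -1.0 if price_limit is None else float(price_limit)
```

With the annotation in place, static analysis can flag callers that pass a non-numeric value, instead of the error surfacing at publish time.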
```python
    self.base_topic + "/peak_shaving/price_limit/set",
    entity_category="config",
    min_value=-1.0,
    max_value=1.0,
```
**Copilot:**
Home Assistant auto-discovery for Peak Shaving Price Limit caps the number entity at max_value=1.0, but the application logic (and api_set_peak_shaving_price_limit docstring) accepts any numeric EUR/kWh value. This makes it impossible to set price limits > 1.0 via HA/MQTT even though they are valid. Consider raising the max to a more permissive value (or making it configurable / omitting the max constraint) to avoid artificially restricting users.
Suggested change: remove the `max_value=1.0,` line.
**Reply:**
Deliberately not adopted. The max_value=1.0 bound pre-exists in the HA auto-discovery payload and is not part of this PR's peak_shaving refactor. It only restricts the HA UI slider — the MQTT setter topic and the YAML config path accept arbitrary numeric values and are unaffected by the cap.
Users who need higher limits can set the value via the set topic or the YAML configuration — the cap only affects the HA slider. Widening it would be a UX change to HA discovery (the default slider range for all existing installations) and therefore a separate concern.
The docs were adjusted in 8a1e5c7 so this can no longer be misread as an application limit.
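The cap under discussion lives only in the HA auto-discovery payload. A hedged sketch of what such a payload can look like — key names follow Home Assistant's MQTT Number discovery schema, topic values follow the snippet quoted in this thread, and the exact payload batcontrol emits may differ:

```python
import json

base_topic = "batcontrol"  # assumed base topic for illustration
payload = {
    "name": "Peak Shaving Price Limit",
    "command_topic": base_topic + "/peak_shaving/price_limit/set",
    "state_topic": base_topic + "/peak_shaving/price_limit",
    "entity_category": "config",
    # Slider bounds for the HA UI only; the command_topic itself
    # accepts any numeric EUR/kWh value.
    "min": -1.0,
    "max": 1.0,
    "step": 0.01,
}
discovery = json.dumps(payload)
```

Because `min`/`max` are discovery-payload fields, changing them alters the default slider range for every subscribed HA installation, which is why the reply treats widening the cap as a separate UX change.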
```markdown
- **Peak Shaving Enabled** - switch entity
- **Peak Shaving Allow Full After** - number entity (0-23, step 1)
- **Peak Shaving Price Limit** - number entity (-1.0..1.0, step 0.01)
```
**Copilot:**
The docs describe the HA auto-discovery entity for Peak Shaving Price Limit as (-1.0..1.0). Since peak_shaving.price_limit is not range-limited in code (any numeric EUR/kWh is accepted), documenting a hard 1.0 upper bound is misleading and also reflects the current auto-discovery limitation. If you relax the HA entity max (or remove it), update this line accordingly; otherwise consider documenting the rationale for the 1.0 cap.
Suggested change:

```markdown
- **Peak Shaving Price Limit** - number entity (-1.0..1.0, step 0.01) for the current HA auto-discovery UI; this is not a hard application limit, and `peak_shaving.price_limit` itself accepts any numeric EUR/kWh value
```
**Reply:**
Fixed in 8a1e5c7. The docs line now states explicitly that (-1.0..1.0) is the HA auto-discovery slider range, not an application limit — peak_shaving.price_limit itself accepts arbitrary numeric EUR/kWh values via the YAML config or the MQTT setter topic. The HA UI cap stays for backward-compat reasons (see the reply on the linked mqtt_api.py:732 comment).
The "(-1.0..1.0)" range on the Peak Shaving Price Limit number entity was easy to read as an application-level constraint. The cap exists only in the HA auto-discovery payload to keep the slider usable; the underlying peak_shaving.price_limit field accepts any numeric EUR/kWh value through the YAML config or the MQTT setter topic. Spell that out so users do not assume 1.0 is enforced application-side. https://claude.ai/code/session_014dHHuZaGbZckiAZfYJNmfi
The previous wording said "this range is HA-UI-only", which was correct but not motivated. Add the domain rationale: peak_shaving.price_limit is a cheap-slot threshold, so 1 EUR/kWh is already an unusually high cutoff in practice. That explains why the HA auto-discovery slider stops at 1.0 even though the underlying field accepts arbitrary values. https://claude.ai/code/session_014dHHuZaGbZckiAZfYJNmfi