Add hivi speaker integration #165307
Conversation
Hi @swansmart
It seems you haven't yet signed a CLA. Please do so here.
Once you do that we will be able to review and accept this pull request.
Thanks!
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
Pull request overview
Adds a new hivi_speaker integration to Home Assistant Core, including discovery, media player support, master/slave sync control, services, translations, and initial config/option flows.
Changes:
- Introduces the `hivi_speaker` integration (config flow, discovery scheduler, device manager/registry, media player, switch entities, group coordinator).
- Adds services (`refresh_discovery`, `postpone_discovery`, `remove_device`) and English/Chinese translations.
- Adds config flow test coverage and fixtures for the new integration.
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 18 comments.
| File | Description |
|---|---|
| homeassistant/components/hivi_speaker/__init__.py | Sets up/unloads the integration and handles device removal behavior. |
| homeassistant/components/hivi_speaker/config_flow.py | Implements single-instance config flow and an options flow to trigger discovery refresh. |
| homeassistant/components/hivi_speaker/const.py | Defines domain and discovery-related constants. |
| homeassistant/components/hivi_speaker/device.py | Defines the device and slave-device models and capability helpers. |
| homeassistant/components/hivi_speaker/device_data_registry.py | Persists extra device data via Store and reacts to device registry changes. |
| homeassistant/components/hivi_speaker/device_manager.py | Core orchestration for discovery, entity creation/removal, and status updates. |
| homeassistant/components/hivi_speaker/discovery_scheduler.py | Implements periodic SSDP-based discovery and adaptive scheduling. |
| homeassistant/components/hivi_speaker/group_coordinator.py | Coordinates master/slave grouping operations and verification. |
| homeassistant/components/hivi_speaker/media_player.py | Implements MediaPlayerEntity with DLNA browsing/playback and local media browsing. |
| homeassistant/components/hivi_speaker/switch.py | Implements switch entities for controlling master/slave sync relationships. |
| homeassistant/components/hivi_speaker/services.py | Registers integration services and dispatches to the device manager. |
| homeassistant/components/hivi_speaker/services.yaml | Documents the services and their fields for the UI. |
| homeassistant/components/hivi_speaker/manifest.json | Declares metadata/dependencies for the integration. |
| homeassistant/components/hivi_speaker/translations/en.json | English strings for config/options flows. |
| homeassistant/components/hivi_speaker/translations/zh-Hans.json | Simplified Chinese strings for config/options flows. |
| tests/components/hivi_speaker/conftest.py | Test fixtures for config entries and setup patching. |
| tests/components/hivi_speaker/test_config_flow.py | Tests for config flow and options flow behavior. |
| tests/components/hivi_speaker/__init__.py | Marks the test package for the integration. |
```python
async def device_registry_updated(event):
    _LOGGER.debug("Device registry updated event: %s", event.data)
    ha_device_id = event.data["device_id"]
    action = event.data["action"]
    if action == "remove":
        # Device deleted - delete current switch if slave device is removed
        slave_device_dict = self._device_manager.device_data_registry.get_device_dict_by_ha_device_id(
            ha_device_id=ha_device_id
        )
        if slave_device_dict is not None:
            speaker_device_id = slave_device_dict.get("speaker_device_id")
            if speaker_device_id == self._slave_speaker_device_id:
                _LOGGER.debug(
                    "Removing switch entity for deleted slave device %s",
                    speaker_device_id,
                )
                # Remove from hub
                self._hub.switches.pop(self._attr_unique_id, None)
                # Remove entity from Home Assistant
                if self.hass and hasattr(self, "async_remove"):
                    await self.async_remove()

self.hass.bus.async_listen("device_registry_updated", device_registry_updated)
```
The device_registry_updated listener looks up the removed device in device_data_registry using the HA device id. However, DeviceDataRegistry also listens to the same event and pops that data on remove, so this lookup can race and return None, preventing switch cleanup. Consider using the device registry identifiers directly (or ensure the registry mapping is still available when handling the event), and store/unsubscribe the listener in async_will_remove_from_hass to avoid leaks.
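The listener-lifecycle part of this comment can be sketched with a minimal stand-in bus (the `FakeBus` and `SyncSwitch` names are illustrative, not Home Assistant APIs): `async_listen` returns an unsubscribe callback, and the entity keeps it so it can release the listener on removal.

```python
from typing import Callable


class FakeBus:
    """Stand-in for hass.bus: async_listen returns an unsubscribe callback."""

    def __init__(self) -> None:
        self._listeners: list[tuple[str, Callable]] = []

    def async_listen(self, event_type: str, callback: Callable) -> Callable[[], None]:
        entry = (event_type, callback)
        self._listeners.append(entry)

        def unsubscribe() -> None:
            self._listeners.remove(entry)

        return unsubscribe


class SyncSwitch:
    """Sketch of an entity that cleans up its bus listener on removal."""

    def __init__(self, bus: FakeBus) -> None:
        self._bus = bus
        self._unsub: Callable[[], None] | None = None

    def async_added_to_hass(self) -> None:
        # Keep the unsubscribe handle instead of discarding it.
        self._unsub = self._bus.async_listen(
            "device_registry_updated", self._handle_registry_update
        )

    def async_will_remove_from_hass(self) -> None:
        # Release the listener so removed entities do not leak callbacks.
        if self._unsub is not None:
            self._unsub()
            self._unsub = None

    def _handle_registry_update(self, event) -> None:
        pass
```

In the real integration the same pattern applies: stash the return value of `self.hass.bus.async_listen(...)` and call it from `async_will_remove_from_hass`.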
```python
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.dispatcher import async_dispatcher_send
from homeassistant.helpers.entity import EntityCategory

from .const import DOMAIN
from .device import ConnectionStatus, HIVIDevice, SlaveDeviceInfo
```
There are unused imports here (HomeAssistantError, SlaveDeviceInfo) which will fail Ruff/flake checks (F401). Please remove them or use them.
Suggested change:
```diff
-from homeassistant.exceptions import HomeAssistantError
 from homeassistant.helpers.dispatcher import async_dispatcher_send
 from homeassistant.helpers.entity import EntityCategory
 from .const import DOMAIN
-from .device import ConnectionStatus, HIVIDevice, SlaveDeviceInfo
+from .device import ConnectionStatus, HIVIDevice
```
```python
@property
def available(self) -> bool:
    """Whether the switch is available"""
    master_device = self.get_master_device()
    return (
        master_device.connection_status == ConnectionStatus.ONLINE
        and master_device.can_be_master
    )
```
available assumes get_master_device() always returns a device, but get_master_device() can return None. In that case master_device.connection_status will raise. Make available resilient by handling the None case (e.g., return False when the master device is missing).
```python
    """Get master device object"""
    return self._master

@property
def slave_device(self) -> HIVIDevice:
    """Get slave device object"""
    return self._slave
```
master_device/slave_device properties return self._master and self._slave, but those attributes are never set in __init__. Accessing these properties will raise AttributeError; either initialize and maintain these attributes or remove the unused properties.
Suggested change:
```diff
-    """Get master device object"""
-    return self._master
+    """Get master device object."""
+    return self.get_master_device()

 @property
-def slave_device(self) -> HIVIDevice:
-    """Get slave device object"""
-    return self._slave
+def slave_device(self) -> HIVIDevice | None:
+    """Get slave device object.
+
+    Returns None if the slave device cannot be found in the registry.
+    """
+    slave_device_dict = (
+        self._device_manager.device_data_registry.get_device_dict_by_speaker_device_id(
+            self._slave_speaker_device_id
+        )
+    )
+    if slave_device_dict is None:
+        _LOGGER.error(
+            "Cannot find information for slave device %s",
+            self._slave_speaker_device_id,
+        )
+        return None
+    return HIVIDevice(**slave_device_dict)
```
```python
async def _handle_operation_success(self, operation_id: str):
    """Handle operation success"""
    operation = self._pending_operations.get(operation_id)
    if not operation:
        return

    # Update operation status
    operation["status"] = "success"
    operation["end_time"] = datetime.now()

    duration = (operation["end_time"] - operation["start_time"]).total_seconds()

    _LOGGER.debug(
        "operation: %s (time: %.1f seconds, try num: %d)",
        operation_id,
        duration,
        operation["retry_count"],
    )

    # Send operation success event
    self.hass.bus.async_fire(
        f"{DOMAIN}_group_operation_succeeded",
        {
            "operation_id": operation_id,
            "master": operation["master"],
            "slave": operation["slave"],
            "action": operation["type"],
            "duration": duration,
            "retry_count": operation["retry_count"],
            "timestamp": datetime.now().isoformat(),
        },
    )

    # Trigger immediate discovery to ensure status synchronization
    _LOGGER.debug("Trigger immediate discovery to synchronize status")
    # self.discovery_scheduler.schedule_immediate_discovery(force=False)

    # Clean up operation
    await self._cleanup_operation(operation_id)

async def _handle_operation_timeout(self, operation_id: str):
    """Handle operation timeout"""
    operation = self._pending_operations.get(operation_id)
    if not operation:
        return

    # Update operation status
    operation["status"] = "timeout"
    operation["end_time"] = datetime.now()

    _LOGGER.warning("Operation timeout: %s", operation_id)

    # Send operation timeout event
    self.hass.bus.async_fire(
        f"{DOMAIN}_group_operation_timeout",
        {
            "operation_id": operation_id,
            "master": operation["master"],
            "slave": operation["slave"],
            "action": operation["type"],
            "duration": self._operation_timeout,
            "timestamp": datetime.now().isoformat(),
        },
    )

    # Even if timeout, trigger discovery to get current status
    _LOGGER.debug("Operation timeout, trigger discovery to get current status")
    # await self.discovery_scheduler.schedule_immediate_discovery(force=False)

    # Clean up operation
    await self._cleanup_operation(operation_id)
```
_handle_operation_success, _handle_operation_timeout, and _cleanup_operation are defined twice in this class. The later definitions overwrite the earlier ones, which is error-prone and makes it unclear which logic is intended (e.g., callback cleanup differs). Please remove the duplicates and keep a single canonical implementation.
```python
params = operation_data.get("params", {})
slave_ip = params.get("slave_ip", "172.18.8.207")
ssid = params.get("ssid", "5357414E204C532D3130305F30353139")
wifi_channel = params.get("wifi_channel", 1)
auth = params.get("auth", "WPAPSKWPA2PSK")
encry = params.get("encry", "AES")
psk = params.get("psk", "12345678")
master_ip = params.get("master_ip", "172.18.8.205")
uuid = params.get("uuid", "FF31F0121338FA6FEED60519")
```
_execute_operation falls back to hard-coded IPs/SSID/PSK when required params are missing. If a param is accidentally omitted, this can target the wrong host/network credentials. Prefer failing fast (reject the operation) when required params are missing, rather than using defaults that represent real-looking values.
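A fail-fast alternative could look like this sketch; `validate_group_params` is a hypothetical helper (not part of the PR), and the key list mirrors the parameters read above.

```python
# Keys the grouping operation reads; all must be supplied by the caller.
REQUIRED_GROUP_PARAMS = (
    "slave_ip",
    "ssid",
    "wifi_channel",
    "auth",
    "encry",
    "psk",
    "master_ip",
    "uuid",
)


def validate_group_params(params: dict) -> dict:
    """Reject the operation when a required parameter is missing.

    Raising here is preferable to silently substituting hard-coded
    IPs or credentials that may target the wrong host or network.
    """
    missing = [key for key in REQUIRED_GROUP_PARAMS if key not in params]
    if missing:
        raise ValueError(
            f"Missing required group parameters: {', '.join(missing)}"
        )
    return params
```

`_execute_operation` could then call `validate_group_params(operation_data.get("params", {}))` up front and surface the error through the existing operation-failure path instead of defaulting.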
```json
"integration_type": "hub",
"iot_class": "local_polling",
"issue_tracker": "https://github.com/home-assistant/core/issues",
"requirements": ["hivico>=0.1.0"],
```
Home Assistant integrations pin PyPI dependencies to an exact version in manifest.json (e.g., async-upnp-client==...). Using a range (hivico>=0.1.0) can lead to untested versions being installed. Please pin to a specific tested version.
Suggested change:
```diff
-"requirements": ["hivico>=0.1.0"],
+"requirements": ["hivico==0.1.0"],
```
```python
success = await domain_data["device_manager"].remove_device(
    speaker_device_id
)
```
async_handle_remove_device calls domain_data["device_manager"].remove_device(...), but HIVIDeviceManager does not implement a remove_device method (only async_remove_device_with_entities). This will raise AttributeError when the service is called; either implement remove_device on the manager or update the service handler to call the existing removal API.
Suggested change:
```diff
-success = await domain_data["device_manager"].remove_device(
-    speaker_device_id
-)
+success = await domain_data[
+    "device_manager"
+].async_remove_device_with_entities(speaker_device_id)
```
```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Set
import asyncio
```
This module has several unused imports (dataclass, Dict, List, Set, asyncio) which will fail Ruff/flake checks (F401) in Home Assistant. Please remove the unused imports.
Suggested change:
```diff
-from dataclasses import dataclass
-from typing import Dict, List, Optional, Set
-import asyncio
+from typing import Optional
```
```python
def __post_init__(self):
    """Post-initialization processing"""
    if not self.unique_id:
        self.unique_id = f"hivi_{self.mac_address.replace(':', '')}"
```
HIVIDevice inherits from Pydantic BaseModel, so __post_init__ will not be called. If you need to auto-generate unique_id when missing, use Pydantic validation (e.g., model_post_init / validators) or set the default when constructing the model.
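The `model_post_init` route could look like this sketch, assuming Pydantic v2; the model is reduced to the two fields involved, so treat it as illustrative rather than the integration's actual model.

```python
from pydantic import BaseModel


class HIVIDevice(BaseModel):
    """Sketch: derive unique_id after validation via model_post_init."""

    mac_address: str
    unique_id: str = ""

    def model_post_init(self, __context) -> None:
        # Pydantic v2 calls this after validation; the dataclass-style
        # __post_init__ hook is never invoked on BaseModel subclasses.
        if not self.unique_id:
            self.unique_id = f"hivi_{self.mac_address.replace(':', '')}"
```

An explicitly passed `unique_id` is left untouched; only the empty default is replaced.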
Breaking change
Proposed change
Add HiVi Speaker integration into Core
Type of change
Additional information
Checklist
- The code has been formatted using Ruff (`ruff format homeassistant tests`)

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:

- Updated and included derived files by running: `python3 -m script.hassfest`.
- New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`.

To help with the load of incoming pull requests: