I bought some lightbulbs. Nice ones. Wiz colour-changing LEDs that can do 16 million colours, warm whites, cool whites, animated scenes, the lot. I screwed them in, downloaded the app, and watched my phone send a request to an AWS server, probably in Frankfurt, so that the server could send a command back to my house, through my router, to a lightbulb that was three metres away from me.
To turn on a light. In my own house. Via Germany.
I sat there for a moment, staring at the app, and thought: absolutely not.
I bought these bulbs. They’re in my house. They’re on my network. I should be able to talk to them directly without asking permission from a server in another country.
So I built wiz-lights, a command-line tool that controls Wiz smart bulbs entirely over the local network using raw UDP. No cloud. No app. No phone-home. Just packets on my LAN.
This is the story of how it works, what went wrong, and why I think the entire consumer IoT model is fundamentally broken.
The IoT security problem (a brief rant)
Before we get into the code, let me explain why this matters beyond “I don’t like apps.”
Every smart device in your house that phones home to a cloud server is a security liability. The bulb itself runs a tiny embedded HTTP and UDP server. The cloud service mediates between the app on your phone and the device on your network. That means:
Your lightbulb has an attack surface. It’s a networked computer running firmware that you can’t audit, can’t update on your own schedule, and can’t firewall properly because it needs to reach the cloud to function.
The cloud service is a single point of failure. When Wiz’s AWS infrastructure goes down, your lights stop responding to the app. You bought a lightbulb that can be bricked by an outage in a data centre you’ve never visited.
Your usage data leaves your house. When you turn on a light, that event goes to the cloud. When you change colour at 11pm, that’s logged. Your lighting patterns are data, and data is product. Signify’s privacy policy makes it clear that data may be processed across regions, including the United States [1].
The vendor can end-of-life your hardware. When the cloud service shuts down, and eventually it will, the “smart” part of your smart bulb dies. You’re left with a dumb bulb that cost four times what a normal bulb costs.
I run an OpenBSD firewall as my home router, which gives me full visibility and control over what traffic enters and leaves my network (but that’s a post for another day). When I saw my Wiz bulbs making outbound connections to AWS endpoints, I started thinking about how to keep everything local.
The Wiz protocol: it’s just UDP
Here’s the good news: Wiz bulbs speak a beautifully simple local protocol. They listen on UDP port 38899 and accept JSON-formatted commands. No authentication. No encryption. No handshake. Just fire a UDP packet at the bulb’s IP address and it does what you tell it.
The basic command structure looks like this:
{"method": "setPilot", "params": {"state": true}}
That turns the light on. Want to set it to a specific RGB colour?
{"method": "setPilot", "params": {"r": 255, "g": 100, "b": 50}}
Brightness:
{"method": "setPilot", "params": {"dimming": 75}}
Colour temperature in Kelvin:
{"method": "setPilot", "params": {"temp": 2700}}
Query the current state:
{"method": "getPilot"}
It’s so simple it almost feels like a mistake. No OAuth tokens, no API keys, no session management. Just UDP datagrams containing JSON. The protocol team at Wiz either built this as a debug interface that accidentally shipped, or they made a deliberate decision to keep the local protocol open while monetising the cloud layer. Either way, I’m grateful.
Discovery works via UDP broadcast. Send a getSystemConfig request to 255.255.255.255:38899 and every Wiz device on your subnet responds with its IP, MAC address, firmware version, and capabilities. It’s essentially the same pattern as mDNS service discovery, but proprietary.
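Discovery is the same trick with the broadcast flag set on the socket. A sketch of the idea (the function name is my own, not the library's):

```python
import json
import socket

# The broadcast datagram every Wiz device on the subnet answers.
DISCOVER_MSG = json.dumps({"method": "getSystemConfig"}).encode()

def discover(timeout: float = 2.0) -> list[dict]:
    """Broadcast getSystemConfig and collect replies until the window closes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    found: list[dict] = []
    try:
        sock.sendto(DISCOVER_MSG, ("255.255.255.255", 38899))
        while True:
            data, (ip, _) = sock.recvfrom(1024)
            reply = json.loads(data.decode())
            reply["ip"] = ip  # remember which device answered
            found.append(reply)
    except socket.timeout:
        pass  # no more devices answering
    finally:
        sock.close()
    return found
```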
Architecture: keeping it clean
I wanted wiz-lights to be more than a hacky script. I wanted a proper CLI tool with a clean architecture that I could maintain and extend. Here’s how it’s structured:
wiz-lights/
├── src/wiz_lights/
│   ├── cli.py          # Click CLI entry point
│   ├── core.py         # LightManager — target resolution & async commands
│   ├── config.py       # JSON config persistence, dataclasses
│   ├── transport.py    # Raw UDP transport layer
│   ├── discovery.py    # Network discovery wrapper
│   ├── moods.py        # 35+ scene mappings & validation
│   ├── scheduler.py    # Schedule parsing & solar calculations
│   ├── dashboard.py    # Textual TUI
│   └── exceptions.py   # Error hierarchy
The key design principle is that the core library has no awareness of the CLI or TUI. Both interfaces call the same LightManager methods. If I wanted to add a REST API tomorrow, I could do it without touching the core logic. This might sound like over-engineering for a lightbulb controller, but I’ve been burned enough times by tangled interface code to know that separating concerns early saves pain later.
The LightManager class is the heart of it. It resolves target names (a light name, a group name, or “all”) to IP addresses, creates WizLight transport objects, and fans out commands concurrently:
class LightManager:
    async def set_brightness(self, target: str, brightness: int) -> list[tuple[str, bool]]:
        lights = self.resolve_target(target)
        results: list[tuple[str, bool]] = []
        async with asyncio.TaskGroup() as tg:
            for name, ip in lights:
                async def _cmd(n: str = name, i: str = ip) -> None:
                    light = self._get_light(i)
                    await light.turn_on(PilotBuilder(brightness=brightness))
                    results.append((n, True))
                tg.create_task(_cmd())
        return results
The asyncio.TaskGroup is the elegant bit. When you say wiz brightness living-room 75 and “living-room” is a group of four bulbs, all four commands fire concurrently. The TaskGroup waits for all of them to complete before returning. No sequential waiting, no manual task management. It’s one of those Python 3.11 features that makes you wonder how we ever lived without it.
The macOS UDP nightmare
This is the part of the project where I lost two days and most of my patience.
The pywizlight library, which handles the low-level Wiz protocol, sends unicast UDP packets to individual bulbs on port 38899. On Linux, this works exactly as you’d expect. Open a socket, send a datagram, receive the response. Standard BSD sockets stuff that has worked since the 1980s.
On macOS, it doesn’t work. Or rather, it doesn’t work if your Python binary was installed via uv (or Homebrew, or pyenv, or anything that isn’t Apple’s own system Python).
The reason is macOS’s code signing and sandboxing model. Apple-signed binaries get full network access. Ad-hoc signed binaries, which is what you get when you install Python via any package manager, are restricted from certain network operations. Unicast UDP to arbitrary ports on local network devices is one of those restricted operations.
The symptom is maddening: the socket opens fine, the sendto() call succeeds, but no packet actually leaves the machine. No error, no exception, just silence. The bulb never receives the command because macOS silently drops the packet at the network layer.
I spent an embarrassingly long time debugging this before I figured out what was happening. tcpdump showed no outbound packets. Wireshark showed nothing. The Python code was correct, the socket was configured correctly, the destination was reachable via ping. Everything looked right, and nothing worked.
The solution I landed on is, I’ll admit, a bit unhinged. But it works:
_SEND_SCRIPT = """
import socket, sys, json
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(json.dumps({cmd}).encode(), ('{ip}', {port}))
try:
    data, _ = sock.recvfrom(1024)
    print(data.decode())
except socket.timeout:
    print('TIMEOUT')
finally:
    sock.close()
"""
async def _send_via_subprocess(ip: str, port: int, cmd: dict) -> str | None:
    script = _SEND_SCRIPT.format(cmd=cmd, ip=ip, port=port)
    proc = await asyncio.to_thread(
        subprocess.run,
        ["/usr/bin/python3", "-c", script],
        capture_output=True, text=True, timeout=5,
    )
    return proc.stdout.strip() if proc.stdout else None
When running on macOS, the transport layer spawns /usr/bin/python3, Apple’s system Python, as a subprocess and has IT send the UDP packet. The system Python is Apple-signed, so it gets full network access. The command is passed as an inline script, so there’s no external file to manage. The response is captured from stdout.
Is this elegant? Debatable. Does it work? Absolutely. And the detection happens at import time, so there’s no runtime overhead from checking the platform on every command:
_SYSTEM_PYTHON = shutil.which("/usr/bin/python3")

def _check_system_python() -> bool:
    if sys.platform != "darwin" or _SYSTEM_PYTHON is None:
        return False
    # ... verify it can actually send UDP
    return True
On Linux, the transport uses direct sockets, as nature intended. The macOS workaround is completely transparent to the rest of the codebase: the WizLight class exposes the same async interface regardless of which send mechanism is in use.
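The dispatch itself can be as small as one module-level decision. A sketch of the idea with illustrative names, not the actual wiz-lights internals:

```python
import shutil
import sys

# Resolved once at import time, as described above.
_SYSTEM_PYTHON = shutil.which("/usr/bin/python3")

def pick_send_path(platform: str = sys.platform) -> str:
    """Decide which UDP send mechanism the transport should use."""
    if platform == "darwin" and _SYSTEM_PYTHON is not None:
        return "subprocess"  # relay through the Apple-signed system Python
    return "socket"          # plain sendto(), no workaround needed
```

The rest of the transport just branches on this once; callers never see the difference.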
Moods, scenes, and the joy of naming things
Wiz bulbs support 35+ built-in animated scenes, from “Ocean” to “Fireplace” to “Christmas.” Each scene has a numeric ID that the bulb understands, but nobody wants to type wiz scene bedroom 5 when they could type wiz scene bedroom fireplace.
The moods.py module maintains a bidirectional mapping between scene names and IDs:
SCENE_MAP: dict[int, str] = {
    1: "Ocean",
    2: "Romance",
    3: "Sunset",
    4: "Party",
    5: "Fireplace",
    # ... 30+ more
    1000: "Rhythm",
}
On top of scenes, I added “moods,” which are named presets that can combine multiple settings and apply them to any target. The defaults ship with four:
- calming: purple RGB at 40% brightness
- party: the Party scene at speed 150
- focus: cool white (4500K) at 80% brightness
- night: warm white (2700K) at 10%
Creating a custom mood is one command:
wiz mood create movie --rgb 20 10 30 --brightness 15
wiz mood apply movie living-room
Moods are stored as frozen dataclasses, which means they’re immutable once created. This prevents accidental mutation and makes serialisation predictable. The validation layer catches impossible combinations (RGB values outside 0-255, colour temperatures outside 1000-10000K) before they reach the bulb.
Solar scheduling: sunrise in Larnaca
The scheduler is probably the most over-engineered part of the project, and I’m not sorry about it.
You can schedule lights to turn on or off at fixed times:
wiz schedule add office on --at 08:00 --repeat mon,tue,wed,thu,fri
Or set a relative timer:
wiz schedule add bedroom off --in 30m
But the bit I’m most pleased with is solar-based triggers. Using the astral library and a configured latitude/longitude, the scheduler can calculate sunrise and sunset times for any location on any date, and trigger actions relative to them:
wiz schedule add porch on --at sunset-30m
wiz schedule add bedroom off --at sunrise+1h
The parsing handles the offset arithmetic:
@dataclass(frozen=True)
class SolarOffset:
    event: str           # "sunrise" or "sunset"
    offset_seconds: int  # positive or negative

def parse_solar_trigger(value: str) -> SolarOffset:
    match = re.match(r"(sunrise|sunset)([+-]\d+[hm](?:\d+[m])?)?$", value)
    if match is None:
        raise ValueError(f"invalid solar trigger: {value!r}")
    event, offset = match.groups()
    # Sum the h/m components of the offset, then apply the leading sign.
    total = sum(int(n) * (3600 if u == "h" else 60)
                for n, u in re.findall(r"(\d+)([hm])", offset or ""))
    if offset and offset.startswith("-"):
        total = -total
    return SolarOffset(event=event, offset_seconds=total)
The compute_solar_time() function uses astral to get the actual sunrise/sunset time for the configured location and then applies the offset:
def compute_solar_time(trigger: SolarOffset, lat: float, lon: float,
                       date: datetime.date | None = None) -> datetime.time:
    loc = LocationInfo(latitude=lat, longitude=lon)
    s = sun(loc.observer, date=date or datetime.date.today())
    base = s["sunrise"] if trigger.event == "sunrise" else s["sunset"]
    adjusted = base + datetime.timedelta(seconds=trigger.offset_seconds)
    return adjusted.time()
In Larnaca, sunset in October is around 17:45. sunset-30m resolves to 17:15, and my porch lights come on before it gets dark. In June, sunset is 20:15, and the same schedule adjusts automatically. No manual updating, no seasonal clock changes. The maths just works.
The scheduler itself runs as a daemon, checking every 60 seconds if any schedule is due. It reloads the config from disk on each iteration, so you can add or modify schedules without restarting it:
async def run_scheduler(config_path: Path) -> None:
    while True:
        cfg = load_config(config_path)
        now = datetime.datetime.now()
        for sched in cfg.schedules:
            if sched.enabled and is_schedule_due(sched, now):
                mgr = LightManager(cfg)
                # execute action...
        await asyncio.sleep(60)
The TUI dashboard
Because a CLI is good but a dashboard is better, I built a terminal user interface using Textual. It shows all your lights in a table with real-time state, brightness, colour, and active scene:
┌──────────┬───────┬────────────┬──────────────┬───────┐
│ Light │ State │ Brightness │ Color │ Scene │
├──────────┼───────┼────────────┼──────────────┼───────┤
│ bedroom │ ON │ 75% │ 2700K │ │
│ office │ ON │ 80% │ 4500K │ │
│ porch │ OFF │ │ │ │
│ lounge │ ON │ 40% │ RGB(200,140) │ Ocean │
└──────────┴───────┴────────────┴──────────────┴───────┘
Groups: all, upstairs, downstairs
Moods: calming, party, focus, night
[o] Toggle [←→] Brightness [r] Refresh [q] Quit
The keyboard controls are simple: navigate with arrow keys, o to toggle the selected light on/off, left/right to adjust brightness by 10%, r to force-refresh state, q to quit. The @work(exclusive=True) decorator ensures that only one async operation runs at a time, so you can’t accidentally send conflicting commands by mashing keys.
The dashboard fetches state for all lights concurrently on startup and on refresh, using the same asyncio.TaskGroup pattern as the core library. On a network with six bulbs, the full state refresh completes in under a second.
Configuration: frozen dataclasses all the way down
The entire configuration layer is built on frozen dataclasses. Every config object, LightConfig, MoodConfig, ScheduleTrigger, ScheduleConfig, AppConfig, is immutable once created. This is a deliberate choice:
@dataclass(frozen=True)
class MoodConfig:
    rgb: tuple[int, int, int] | None = None
    brightness: int | None = None
    colortemp: int | None = None
    scene: str | None = None
    speed: int | None = None
Immutability means you can’t accidentally modify a mood mid-operation. If you want to change something, you create a new config, validate it, and write it to disk. The load/save cycle is the only mutation point, and it’s explicit.
Config files live in platform-specific locations using platformdirs: ~/Library/Application Support/wiz-lights/ on macOS, ~/.config/wiz-lights/ on Linux. The schema is versioned ("version": 1) so future format changes can be handled with migrations rather than breaking changes.
Testing without real bulbs
You can’t run a test suite that depends on physical lightbulbs responding to UDP packets. Well, you can, but it makes CI interesting.
The solution is dependency injection. The LightManager accepts an optional light_factory parameter:
class LightManager:
    def __init__(self, config: AppConfig,
                 light_factory: Callable[[str], WizLight] | None = None):
        self._factory = light_factory or (lambda ip: WizLight(ip))
In tests, a FakeLightFactory provides mock lights that record method calls without sending any packets:
class FakeLightFactory:
    def __init__(self):
        self.created: dict[str, FakeLight] = {}

    def __call__(self, ip: str) -> FakeLight:
        light = FakeLight(ip)
        self.created[ip] = light
        return light
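The FakeLight itself only needs to mimic the transport surface. A minimal version (illustrative; the real test double presumably records more detail):

```python
class FakeLight:
    """Stands in for the UDP transport: records calls, sends nothing."""

    def __init__(self, ip: str):
        self.ip = ip
        self.calls: list[tuple[str, object]] = []

    async def turn_on(self, pilot: object = None) -> None:
        self.calls.append(("turn_on", pilot))

    async def turn_off(self) -> None:
        self.calls.append(("turn_off", None))
```

A test can then inject a FakeLightFactory as light_factory, run a command, and assert on the recorded calls without a single packet leaving the machine.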
The test suite has 97 unit tests covering config serialisation, target resolution, mood validation, schedule parsing, solar calculations, and every CLI command. Coverage sits at 84%, with exclusions for the TUI (hard to test in CI) and the macOS transport workaround (platform-specific by definition).
Integration tests exist but are marked separately and only run when real bulbs are available on the network. CI runs the unit tests; I run the integration tests from my desk in Larnaca while watching the lights actually change.
What I’d build next
The scheduler’s solar trigger parsing is implemented but not yet wired into the evaluation loop; that’s the obvious next step. I’m also thinking about:
A REST API layer. The clean separation between core and CLI means adding a FastAPI or Flask layer would be straightforward. This would let me control lights from any device on the network without needing the CLI installed.
Home Assistant integration. Wiz bulbs already have a Home Assistant integration via the cloud, but a local-only integration using this library would be more in the spirit of the project.
OpenBSD router integration. My OpenBSD firewall already blocks the bulbs from phoning home. The next step is running wiz-lights directly on the router, so the scheduler doesn’t depend on my laptop being open.
The principle
I started this project because I was offended. I bought lightbulbs, physical objects in my house, and someone decided they should need permission from a server in another country to work. That’s not smart home technology. That’s a subscription model for switching lights on and off.
The Wiz local UDP protocol exists. It’s simple, it’s fast, and it works without the cloud. The fact that the official app doesn’t offer a “local only” mode is a business decision, not a technical one. They COULD let you control your bulbs locally. They CHOOSE not to, because the cloud connection gives them data and a mechanism for vendor lock-in.
In the EU, your property is your property. Once I’ve bought a lightbulb, I should be able to do whatever I want with it, including telling it to stop calling home. If that means reverse-engineering the protocol and building my own control software, so be it.
The code is MIT licensed and on GitHub. Explore everything, modify everything. Once you’ve bought it, you own it.