I’ve been a long-time user of Zoneminder, but not any more.
I’ve written extensively about my use of Zoneminder, particularly in combination with zmEventNotification for object detection. It’s a pretty solid system and has served me well. I started out with a very basic setup – a USB webcam attached to my home server pointing out of the window. That’s grown to four IP cameras, each of which has both a high and low resolution feed. I’ve gone beyond basic motion detection using zones to a system that detects objects so that I can eliminate false positives and only get alerted when a person is detected.
That’s where Zoneminder starts to fall down. It has always been quite CPU intensive, as it breaks the incoming video stream from the camera into individual frames to perform the motion capture and analysis.
The ideal solution would be to perform the analysis on the low resolution stream, at a low frame rate, and then, if motion is detected, to capture high resolution video at a higher frame rate for a smooth video.
The latest version of Zoneminder does sort of support that scenario. It has always been possible to link cameras so that motion detected by one camera triggers recording from another. By setting up the high and low resolution streams as separate cameras, you can perform the analysis on the low resolution camera and then have that trigger recording from the high resolution camera. In the latest version, you don’t then need to decode the high resolution stream, although you still need to run the analysis process.
It’s still quite CPU heavy, and a clunky solution. You have to define each camera twice – once for each feed. You then set the low resolution camera to “modect” for motion detection. The high resolution camera is set to “nodect”, and linked to the low resolution camera.
That means you’ll end up with a cluttered user interface with twice as many monitors as you actually have cameras:
If you disable decoding on the high resolution feeds in order to reduce CPU use, those cameras will just be blank in the interface.
I had ended up running Zoneminder with just the low resolution feeds, which were good enough to get an idea of when motion was detected. I was then running 24×7 recording with no motion detection using Surveillance Station on a Synology NAS. If I really wanted a high resolution clip of some motion, I’d have to go back through the Surveillance Station recordings. I did look into whether I could script something to extract the recordings from the NAS when Zoneminder detected motion, but never really got anywhere with it.
Finally, while the object detection provided by zmEventNotification is good, it’s also quite CPU heavy even on low resolution feeds.
The Coral device is a game changer. It’s a dedicated accelerator that takes the object detection load off the PC altogether. To be fair to zmEventNotification, there is now support for the Coral device, but Frigate has some other advantages.
The first of these is the way that it handles multiple feeds from a camera. Instead of treating each feed as a separate camera, as Zoneminder does, you can define multiple feeds for a single camera. Each feed is defined in the Frigate configuration file as an input.
Each of these inputs can have roles assigned to it – so, for example, you can assign the “detect” role to the low resolution feed and the “clips” role to the high resolution feed to have the combination of CPU efficient motion detection and high quality recordings. Other roles include the “rtmp” role which provides a passthrough of the video stream, and is needed if you want to show a live feed within Home Assistant.
For example, my front garden camera has the “detect” and “rtmp” roles assigned to the low resolution feed, with the “clips” role assigned to the high resolution feed:
```yaml
cameras:
  front_garden:
    ffmpeg:
      inputs:
        - path: rtsp://<URL>/Streaming/Channels/2
          roles:
            - detect
            - rtmp
        - path: rtsp://<URL>/Streaming/Channels/1
          roles:
            - clips
```
Does CPU efficiency matter with the Coral device? Well, yes, it still does to a certain extent. Frigate does some very basic motion detection before sending images to the Coral for object analysis. It’s not as sophisticated as Zoneminder’s system but it doesn’t need to be – it’s just a basic first pass before handing off to the Coral for the heavy lifting. It does allow you to do simple things like mask off areas that you’re not interested in. For example, the camera that covers my front garden includes a little bit of the street beyond. I can set up a mask to exclude that, so that I can avoid getting notified every time someone walks past outside.
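In Frigate, a mask like that is just a polygon of x,y points under the camera’s motion settings, in the coordinates of the detection stream. As a sketch – the coordinates below are placeholders, not my actual mask – it looks something like this:

```yaml
cameras:
  front_garden:
    motion:
      mask:
        # Each mask is a comma-separated list of x,y points forming a
        # polygon, measured against the "detect" stream's resolution.
        # These values are illustrative placeholders - pick your own
        # using a still frame from the camera.
        - "0,0,640,0,640,40,0,40"
```

Motion inside the masked polygon is ignored, so nothing in that region ever gets sent to the Coral for object detection.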
The integration with Home Assistant is also good. There are two components to it – an add-on and a custom component. The add-on is the main Frigate system that connects to the Coral device and your cameras. If you run Home Assistant on a dedicated device with the Coral TPU attached, you’ll need the add-on.
If you run Home Assistant in a virtual machine, as I do, then you’ll need to run Frigate on the underlying OS or even a completely separate machine. In theory it should be possible to pass the USB hub to your VM, but the docs don’t recommend it. I run Frigate as a Podman container on the host that runs my Home Assistant VM.
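For anyone wanting to do something similar, a compose-style definition is the easiest way to keep the container setup reproducible. This is only a sketch – the image tag, ports and host paths are assumptions, so check the Frigate documentation for your version:

```yaml
# Sketch of a compose file for running Frigate on the container host.
# Image tag, ports and paths are assumptions - adjust for your setup.
version: "3.9"
services:
  frigate:
    image: blakeblackshear/frigate:stable-amd64
    restart: unless-stopped
    devices:
      - /dev/bus/usb:/dev/bus/usb   # pass the Coral USB device through
    volumes:
      - ./config:/config            # frigate config file lives here
      - ./media:/media/frigate      # clips and recordings
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "5000:5000"                 # web UI and API
      - "1935:1935"                 # RTMP feeds for Home Assistant
```

The same definition works with `podman-compose`, or can be translated into a plain `podman run` command.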
The base Frigate system has a user interface of its own, where you can view your cameras and clips:
The real strength, though, comes from the Home Assistant integration. Whether you run Frigate as the Home Assistant add-on or on a separate machine, you’ll need the custom component. This connects to your Frigate instance and provides sensors, configuration switches and camera feeds into Home Assistant.
The switches are used to dynamically change your Frigate configuration by toggling detection, snapshots, and recording for each camera. I particularly like the separation of detection and recording – it means that you can use the cameras as motion sensors without always storing clips.
You can, for example, turn the clip recording off for some of the cameras when you’re at home but still use them for motion detection. The motion detection acts exactly the same way as any other motion sensor, so you can use it for things like controlling lights without also having to record video.
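Because the detection shows up as an ordinary binary sensor, the lighting case is a standard Home Assistant automation. As a sketch – the entity and light names here are assumptions, not my actual setup:

```yaml
# Sketch: turn on a light when Frigate detects a person.
# Entity names are illustrative - check what the custom
# component actually creates for your cameras.
automation:
  - alias: "Front garden person detected"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_garden_person_motion
        to: "on"
    condition:
      - condition: sun
        after: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.front_porch
```

No clip is stored unless the recording switch for that camera is also on, which is exactly the separation described above.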
The live camera feeds can also be shown in Home Assistant as long as you’ve assigned the “rtmp” role to one of the inputs. Video clips are available via the Media Browser within Home Assistant.
It’s a little clunky, as you have to navigate from your media sources to Frigate and then to clips. I’d really like to have a separate panel that links directly to my clips, but that’s a limitation of Home Assistant’s media browser.
So how good is the Coral’s object detection? I’m running it against low resolution feeds that are typically 640×360 pixels at 5fps, and I’ve yet to have a false positive. It’s obviously possible that it’s missed something, but it’s reliably picked up motion during the day and at night. It’s good enough that I no longer bother with 24×7 recording. I hardly ever need to view Frigate’s own web interface – everything I need is within Home Assistant.
Next time I’ll go through my Frigate setup in a bit more detail, with some examples of the automations that I’m using for notifications.