
:eyes: Bobby is your open-source alarm. Leave your home with peace of mind. Still running, but a full rewrite in Go is ongoing.

Home Page: https://doc.bobby-home.com/

License: MIT License


bobby-home's Introduction

Bobby Home

Your open-source alarm. Leave your home with peace of mind.



Introduction

Bobby Home is open-source software that protects your home from burglars without compromising your privacy.

It is primarily designed to run on a Raspberry Pi with a PiCamera, which keeps the installation affordable.

This project is built from the ground up for simplicity, for developers and users alike, with a strong emphasis on privacy.

Why?

  • We all care about our homes, and sadly burglaries happen. That is why more and more companies sell alarms, which are sometimes very expensive yet ship poor software in terms of stability and user experience. Bobby is affordable, and the software is simple enough that your grandparents can use it.
  • 👀 Privacy matters. Alarm systems introduce cameras inside your home, which can cause dramatic privacy flaws. With Bobby your data belongs to you and only you: all data is stored and processed locally, on your Raspberry Pi! [1]
  • 💪 You can extend Bobby to connect it with all your IoT devices. Do you want to turn some lights on when a motion is detected? Close your blinds? You can do anything you want thanks to Automations.
  • 👍 Open-source is great. You can control and contribute to your alarm to improve it.

[1] The only exception is Telegram which can be used to receive pictures and videos if motion is detected. But Telegram complies with our privacy rules.

Hardware

  • Raspberry Pi 4 or 3 with at least 1 GB of RAM. We recommend a Raspberry Pi 4 with 2 GB of RAM.
  • PiCamera (or compatible camera). We use the picamera library to manage the camera, so make sure your camera can be controlled by this library.
  • Raspberry Pi Zero if you want to add remote cameras. For instance, I have my Raspberry Pi 4 in my living room (with a camera) and a Raspberry Pi Zero with a camera to monitor my courtyard.
  • Any SD card with a decent read/write speed (otherwise your system will be slow) and a capacity of at least 16 GB. We recommend 32 GB or more if you feel like it.
  • A strong power supply for each of your Raspberry Pis. Go to their website and check their recommendations.
  • 🔉 Any speakers with a jack connector if you want to make noise when people are detected. ⚠️ Don't power your speakers through the Raspberry Pi USB port; it can lead to annoying noise. Go for a small dedicated power supply.

🚀 What is Bobby able to do?

Bobby is able to detect whether somebody is present through the PiCamera. Then it will:

  • Send a Telegram message to alert you, with a picture taken when the person was detected and a video covering the 10 seconds after the detection began. It then sends you a video every minute until the person leaves or you switch off the alarm.
  • (if created) Call Actions linked to Automations so you can interact with your IoT devices (turn your lights on...).
  • (if enabled) Play a scary sound through Automations. We provide a service to play sound through speakers directly connected to your Raspberry Pi.

You can manage your alarm status per device through the following options:

  • Web interface.
  • Telegram bot through the command /alarm.
  • Autopilot: create schedules to automatically turn your alarm on/off.

👷 Status

This software is currently under active development.

It is currently deployed in 3 homes with a total of 5 cameras, running without major issues.

📚 Documentation

For developer documentation, visit doc.bobby-home.com.

Motivations

In the beginning, I wanted to secure my home because a lot of burglaries happened near me in 2019-2020. On the market today, you can find a lot of security cameras that are supposed to be intelligent, but I have one main concern: my privacy. I analyzed some security cameras and found some really bad things. As for alarm systems sold by companies, they are expensive yet still deliver a bad user experience. For example, my neighbors had a security system triggered by their dogs every night, so they wasted almost a thousand euros.

So I decided to build everything myself. Sure, a lot of open-source software exists, but it doesn't answer my needs. I want something simple, and those tools are badly designed; I could not understand the software easily enough to build my own thing on top of it.

If you look at the source code of Bobby, you will see that it is small and comprehensible, yet the system is powerful and fulfills the requirements of an alarm.

Then the project grew and here it is! I decided to go fully open-source.

bobby-home's People

Contributors

dependabot[bot], mxmaxime


bobby-home's Issues

refactor: telegram bot in one place

We cannot have two instances of the bot running in different places.
My idea was to add a service running a standalone Django script, to allow residents to turn the alarm on/off. On paper it was great, but it doesn't work... 🚫

telegram_bot_1         | 2020-08-15 16:27:06,370 - telegram.vendor.ptb_urllib3.urllib3.connectionpool - WARNING - Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError('<telegram.vendor.ptb_urllib3.urllib3.connection.VerifiedHTTPSConnection object at 0xb47e6358>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /bot1017271700:AAF9IG-AIMHGAuM_eS_l4Ft4F0yVWbdN3H8/getUpdates
telegram_bot_1         | 2020-08-15 16:30:31,381 - telegram.ext.updater - ERROR - Error while getting Updates: Conflict: terminated by other getUpdates request; make sure that only one bot instance is running
^CGracefully stopping... (press Ctrl+C again to force)
Stopping webapp_telegram_bot_1 ... done

Setup IoT status via MQTT message

Use case

One RPi 0 is a security station and it reboots. When it joins the MQTT broker, it needs to know whether the alarm is on in order to run the right code accordingly.

Technically

❎ At the beginning, I used an HTTP GET request to fetch the status (i.e. GET /alarm/status). That works for RPis, for instance, but not for small IoT devices.

❎ I thought about implementing a request/answer flow: a device requests the status by publishing to one topic, and another device answers by publishing to a second topic. That doesn't feel right and, more importantly, I won't have many devices restarting every hour; it's a "special" use case for certain kinds of devices. For instance, some devices won't need any status to initialize.

🤔 In my Django script that handles MQTT, I can have an on_connect callback and publish the status when a new device connects. I would like to know which topics that client is subscribed to, so I can send it only what it needs, but I can't find a way to do this.

⚠️ I can't target one device, so every device will get the status whenever a new device connects.
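
A standard MQTT pattern that would sidestep this (not what the issue implements) is a retained message: the broker itself re-delivers the last published status to each client the moment it subscribes, so a rebooting device gets the alarm state without any request/answer round trip. A minimal paho-mqtt sketch, assuming a hypothetical status/alarm topic:

import paho.mqtt.client as mqtt

ALARM_STATUS_TOPIC = "status/alarm"  # hypothetical topic name

client = mqtt.Client()
client.connect("localhost", 1883)
client.loop_start()

# retain=True: the broker stores this message and replays it to every
# client that subscribes later, e.g. an RPi 0 that just rebooted.
info = client.publish(ALARM_STATUS_TOPIC, payload=b"1", qos=1, retain=True)
info.wait_for_publish()
client.loop_stop()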

feat: add device_id data when publish a picture

I already have this kind of issue, described here: #44

When we detect a motion with a camera, we publish:

  • one message saying "hey, a motion is here", with the device_id data.
  • one message with the picture as a bytearray, without the device_id data, because I didn't find a way to attach this data to a bytearray.

But if the picture is not associated with a device_id, it leads to a lot of trouble. To associate the motion event with the picture, we have to do hacky things with the date/time, +- 5s for instance.

The real problem will come when we develop the feature that lets small devices (Pi 0 W, ESP...) send pictures frequently so that one Pi 4 does the processing. As we won't know where a picture came from, it will be a mess:

  • The ROI won't be associated with the camera... so we lose a lovely feature.
  • We don't know whether the device is in "security" or "watch" mode.
  • We don't know where the device is located, so we can't notify the user "hey, motion is detected in Maxime's bedroom".
  • etc.

If I want to fix this easily, I can create one topic per device containing the device_id. The device publishes on blabla/device_id and the main Pi has to subscribe to every topic matching this pattern, as sketched below.

It seems right to me, because when we send a picture, we only want the main Pi to process it.
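
A sketch of this one-topic-per-device idea with paho-mqtt; the motion/picture/<device_id> topic layout is an assumption, not the project's actual naming:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # The topic looks like motion/picture/<device_id>: the device id is
    # recovered from the topic itself, so the payload stays a raw
    # bytearray (no JSON wrapping needed around the picture).
    device_id = msg.topic.rsplit("/", 1)[-1]
    picture_bytes = msg.payload
    print(f"picture from {device_id}: {len(picture_bytes)} bytes")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("motion/picture/+")  # '+' matches any single device id
client.loop_forever()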

As a resident, I want to receive a picture when a motion is detected on a camera

Technically

The device that takes the picture can be an RPi 0, for instance. It notifies the main RPi that a motion was detected (via a Celery task). But then... I need to retrieve the picture to send it to the residents. I thought about rsync or something like that, but... I'm already using MQTT to send the "hey, motion detected here!" notification, so why not send the picture via MQTT? Is it possible?

⚠️ Publishing the picture as the payload could crash for oversized payloads, but I think pictures are fine (see the sketch after the quote below).

A ValueError will be raised if topic is None, has zero length or is invalid (contains a wildcard), if qos is not one of 0, 1 or 2, or if the length of the payload is greater than 268435455 bytes - https://pypi.org/project/paho-mqtt/#publishing
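
A sketch of publishing the JPEG bytes directly as the payload, guarding against the limit quoted above (the topic name and broker host are illustrative):

import paho.mqtt.publish as publish

MAX_PAYLOAD = 268_435_455  # paho-mqtt's hard limit quoted above (~256 MB)

with open("motion.jpg", "rb") as f:
    picture = f.read()

# A camera JPEG is a few hundred KB, far below the limit.
if len(picture) <= MAX_PAYLOAD:
    publish.single("motion/picture", payload=picture, hostname="localhost")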

improve: stop video streaming to avoid resource lock

We stop the video streaming when the alarm is turned off. When the alarm is off, we shut down a Python process. The issue today is that we don't release any resources, such as the PiCamera and the in-memory stream.

Error saving in-memory database eclipse-mosquitto

mqtt_1                 | 1595765174: Saving in-memory database to /mosquitto/data/mosquitto.db.
mqtt_1                 | 1595765174: Error saving in-memory database, unable to open /mosquitto/data/mosquitto.db.new for writing.
mqtt_1                 | 1595765174: Error: Permission denied.

feat: holiday mode

As a resident, I want to define when I am on vacation so that my house is monitored.
Ex: "I'm out from [date] [hour] (turn the alarm on) to [date] [hour] (turn the alarm off)."

Bug: FreeCarrier sends a « None » message

When we send a notification, sometimes we only send a picture without a message. In this case, FreeCarrier sends "None" in an SMS to the resident, because the "message" parameter is set to None.

To fix this: if message is None, do nothing, as in the sketch below.

We will have to do the same for every messaging transport that can only send text messages.
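
A sketch of that guard inside a hypothetical SMS transport class (names are illustrative, not the project's actual API):

class FreeCarrierMessaging:
    def send_message(self, message=None, picture_path=None):
        # This transport can only send text: without the guard, the
        # literal string "None" ends up in the resident's SMS.
        if message is None:
            return
        self._send_sms(message)  # hypothetical low-level helper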

improve: don't use ENV variable to define mqtt topics

We used the MQTT_ALARM_CAMERA_TOPIC environment variable for the alarm topic. It's a very bad design and should be replaced. For now, we will go with hard-coded string values for topics. Later we will design a stronger system, because this part of the software is very sensitive and the current approach is error prone.

Alarm can't publish MQTT messages

Since PR #23 we have an issue. When a motion is detected, we publish a message. It doesn't crash, but it doesn't work either.

It may be related to this commit. We're using only one MQTT client instance, even in the camera's thread.
The same MQTT client is used to:

  • Turn on/off
  • Alert (in another thread)

To try: 2 mqtt clients.

Collect & aggregate docker container logs

We won't use an Elasticsearch database because it's way too heavy and not developer friendly to set up.

I suggest we use fluentd with Loki & Grafana. It seems easy to set up and lightweight.

Please see related PR.

Telegram bot commands use Celery task instead of Rest API

At the beginning of this project I was implementing a REST API and using it to let the Telegram bot turn the alarm system on/off. I have decided to abandon this REST API (at least for now), so I have to change my Telegram bot.

  • Remove REST API endpoints

Let the user schedule alarm on/off

User story

As a resident, I want to be able to schedule when my security station is on or off.

Technically

We're already using Celery, so we want to use Celery to handle periodic tasks.

The default scheduler is the celery.beat.PersistentScheduler, that simply keeps track of the last run times in a local shelve database file. There’s also the django-celery-beat extension that stores the schedule in the Django database, and presents a convenient admin interface to manage periodic tasks at runtime. - Celery Periodic Tasks documentation.

So this extension seems to have everything I need: the ability to schedule things via the Django database, and as a bonus, an interface to manage them. We're going to test this piece of software to see if it matches our needs.

So we will create a Task that turns the alarm system on (by sending an MQTT message), and the resident will be able to schedule it, as sketched below.
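
A hedged sketch of that idea with django-celery-beat: a task that publishes the "alarm on" message, plus a resident-defined schedule stored in the Django database. The task name, crontab values and publish_alarm_status helper are illustrative:

from celery import shared_task
from django_celery_beat.models import CrontabSchedule, PeriodicTask

@shared_task(name="alarm.turn_on")
def turn_on_alarm():
    publish_alarm_status(True)  # hypothetical helper sending the MQTT message

# The resident schedules the alarm for weekday mornings at 8am:
schedule, _ = CrontabSchedule.objects.get_or_create(
    minute="0", hour="8", day_of_week="1-5",
)
PeriodicTask.objects.create(
    crontab=schedule,
    name="Turn the alarm on, weekday mornings",
    task="alarm.turn_on",
)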

bug: cv2 module not found

When I run my Docker container that uses the OpenCV Python binding, it does not find the "cv2" module.

In the container, I see this broken link:

(core) ❯ docker run -it a029d085cab4 /bin/bash
root@075ed65cccc1:/usr/src/app# ls -l /usr/local/lib/python3.7/site-packages/ | grep cv
lrwxrwxrwx  1 root staff   48 Sep 10 16:14 cv2.so -> /root/opencv/build/lib/python3/cv2.cpython-37.so

But we should have something like cv2.cpython-37m-x86_64-linux-gnu.so, or the ARM equivalent.

Feat: send picture to analyze whether a person is present

A Pi Zero/ESP can't run the program that detects whether a person is in a picture.

We need this kind of device in the system to keep it affordable: if each security camera is backed by an RPi 3 or 4, it gets expensive.

So we need to send pictures, frequently, to a Pi 4 which will process them. We can send these pictures via MQTT.

This will allow us to have multiple security cameras.

Technically

We will have multiple "low-power" devices sending pictures to be analyzed. But one Pi 3/4 won't be able to process all of these pictures; we will need to distribute the work across multiple Pi 3/4s.

As we use Celery, I'm thinking of using Tasks to analyze these pictures. But I'm wondering if this is heavy: every x seconds we would start a task with TensorFlow... Isn't it expensive to initialize TensorFlow every time?

The other idea is to have a long-running process that analyzes pictures as they come in. But the issue with this is distribution: I would need to handle it manually, which doesn't seem very good.

Proposals

Either we do the processing within the "webapp" application, by creating a new Docker container:

  • the "webapp" receives pictures, analyzes them and records directly whether a motion is detected (database...).

Or we could create a new Docker container in the smart-camera:

  • Same, but then it communicates (through MQTT) whether a motion is detected.

Why the second option? Because this application already has everything we need to analyze frames and tell the system whether a motion is present, with all the related logic: threshold, multiprocessing, TensorFlow model, MQTT plumbing...

If we moved the whole thing into the "core", it would duplicate a lot of things and be a nightmare to maintain.

feat: define area to watch on security camera (ROI)

User Story

As a resident, I want to define one area to watch on my security camera.
Use case: I do not want some areas of my camera's view to be monitored, so I define the areas to be watched.
For instance, in front of my house I have my cars and my entry gate. I do not want to monitor my entry gate (somebody may legitimately come through it!), but I do want to know if people approach my cars.

Technically

For the first release of this feature, we will define ROIs (regions of interest) as rectangles.
The area shape has to be closed to be valid.

How does the camera know where to "watch"?

ℹ️ The camera will watch the whole frame, but we will check whether detected people are in the ROI. I don't know if I can "crop" the frame to analyze only parts of it; I'll look at that later, as it is "just" a performance booster and doesn't provide user-facing value.

Well,

  • ❌ dumb cameras won't, as they have minimal power. The ROI will be used in some kind of Celery task, which will have database access to fetch the ROI associated with the device. This is why we need to know the device_id when a device sends a picture; otherwise we couldn't perform such operations.

  • ✔️ for smart cameras, we will send the ROI at boot time, when they notify us. It will be done via MQTT as JSON. Basically, we want these cameras to perform everything by themselves, as they have the required power.

💭 Thanks to PR #87 we have a nicer way to send status to a specific device and service. But I don't think it's a good idea to use the "status" topic to send configuration. Configuration will be sent as a string containing JSON; status will be only a boolean as bytes. But if my camera service needs some data to actually work, using 2 different topics can be a pain.

My original concern was about low-power devices wanting to know the status of a specific service/device. If so, the device would need to parse JSON, and that is not a cheap step.

How to save rectangles?

Well, a rectangle can be defined with:

  • a point (x;y)
  • a width
  • a height

⁉️ We have to take special care, because this information can be defined on a picture of a certain size, let's say 1080p, while the processing works on a smaller size for performance reasons. We have to map between the definition format and the processing format.

✔️ We have to save the resolution of the picture that was used to define the rectangles. Thanks to this data, we can scale the shapes, as in the sketch below.

OpenCV check

  • first, we need to know where the person was detected, using TensorFlow.
  • we create an OpenCV contour from the rectangle data.
  • we can use it to check whether a point is inside this contour. But we have to do it for the 4 corners of the rectangle where the person was detected, requiring at least one corner to be inside. ⚠️ What happens if the rectangles overlap but no corner is inside? Imagine the ROI sits entirely inside the detection rectangle: none of the detection corners is inside the ROI, which leads to a false negative! It's almost impossible, but it can happen...

💭 In order to use the pointPolygonTest function we need a contour and a point. The point is given by People.bounding_box. But what about the contour? It is basically our ROI, but we don't have an OpenCV contour, and I couldn't find how to build one from 4 points... -> found it, I've created a function to create contours from points (source).

⚠️ It is more complicated than checking whether or not a point is in a contour!
Let's take a look at these examples:
[image in the original issue illustrating the overlap cases]

So maybe I will have to do something like this:

  • is any corner of the detection bounding box inside the ROI?
  • is any corner of the ROI inside the detection bounding box?
  • is there an intersection between the two rectangles? See this Stack Overflow answer, and this topic from the OpenCV forum which presents (I think) the same method. A pure-Python version is sketched below.
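
If both rectangles are axis-aligned, the third check doesn't even need OpenCV; a sketch:

def rects_intersect(a, b):
    # Each rectangle is (x, y, w, h). Two rectangles overlap iff each
    # one starts before the other ends, on both axes. This also covers
    # the corner-free overlap cases illustrated above.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah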

Order points for OpenCV

We need the points to be ordered clockwise to create contours. I currently do it by hand, but I can use a function to do it, like the sketch below. Works pretty well.
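
A sketch of such an ordering helper (sort the corners by angle around their centroid; with image coordinates, where y points down, ascending angle runs clockwise on screen):

import numpy as np

def order_points_clockwise(points):
    # points: iterable of 4 (x, y) corners in any order.
    pts = np.asarray(points, dtype="float32")
    center = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    return pts[np.argsort(angles)]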

⚠️ Today we just want to know whether someone is detected. Note that we can have one person outside the ROI and somebody else inside the ROI at the same time -> motion is detected, ring the alarm!

Image to Bytes

OpenCV to bytes:

import cv2

im = cv2.imread('test.jpg')
im_resize = cv2.resize(im, (500, 500))

# encode the Numpy ndarray in the specified format.
# im_buf_arr is the encoded image in a one-dimensional Numpy array.
is_success, im_buf_arr = cv2.imencode(".jpg", im_resize)
byte_im = im_buf_arr.tobytes()

# or using BytesIO
# io_buf = io.BytesIO(im_buf_arr)
# byte_im = io_buf.getvalue()

PIL image to bytes:

import io
from PIL import Image

im = Image.open('test.jpg')
im_resize = im.resize((500, 500))
buf = io.BytesIO()
im_resize.save(buf, format='JPEG')
byte_im = buf.getvalue()

feat: record a video when a person is detected

When a person is detected, we start recording a video; when no person is detected anymore, we stop.

prerequisite:

Proposals

Videos

We might record a video of only the first x seconds (e.g. 30 s) to send to the resident quickly, while still recording the whole video (until no more people are detected).

On the webapp side, when we receive a motion notification, we schedule a job in x seconds (e.g. 30 s) to retrieve the video, turn it into MP4 and send it to Telegram. ⚠️ If the motion lasts less than x seconds, the job should be canceled, or simply do nothing, because the video will be processed by the "no more motion" event. Indeed, when the webapp receives the "no more motion" event, it processes the video and sends it to Telegram if it's not too big ("too big" to be defined). This task might have to process 2 videos: the short one and the whole one.

To avoid creating timed tasks, and to avoid synchronizing the camera software and the webapp (on the x seconds, e.g. 30 seconds), we can put the responsibility on the camera: when it splits the recording, it sends an MQTT message. The webapp receives it, retrieves the video and processes it.

Benefits:

  • The resident gets a video quickly.
  • We quickly save a video to the cloud. If the burglar decides to break the RPi, we still have a video.

How the webapp service will receive videos

We have to think about how to send videos between devices: a Pi 0 and the webapp on a Pi 4, for instance. For pictures we use MQTT; that won't be possible for videos, which are too heavy.

  • use rsync or something similar? The Pi 4 will be able to connect to every Pi on the network (#11). But it won't work for devices that don't run Linux, such as ESP devices. One question: do we really want to record videos when it's an ESP or similar?
  • use HTTP. Is it suitable?

The main issue with HTTP is that it makes the camera software responsible for sending the video. If the service crashes or gets closed, we have to make sure the video is still sent. Thanks to the MQTT connectivity, if something goes wrong, the webapp will still have MQTT messages to react to: no more motion, disconnect. Even if the webapp goes offline, it will receive the MQTT messages when it comes back.

Two possible strategies:

  • The camera software sends videos to the webapp.
  • The webapp retrieves videos (reacting to the MQTT messages it receives).

Issue with shared Docker Volume.

The smart-camera produces a video, then sends an MQTT message telling python_process_mqtt to create a Celery task to process the video. But the worker can't find the video:

FileNotFoundError: [Errno 2] No such file or directory: 'MP4Box -add /usr/src/videos/f4e45230-d02e-4504-a972-eac5957c973f-0.h264 /usr/src/videos/f4e45230-d02e-4504-a972-eac5957c973f-0.mp4'

I checked whether the video was in the folder, and it is:

docker-compose run rabbit_worker /bin/bash
user@675bdcfb87f8:/usr/src/app$ ls /usr/src/videos/f4e45230-d02e-4504-a972-eac5957c973f-0.h264
/usr/src/videos/f4e45230-d02e-4504-a972-eac5957c973f-0.h264

Even weirder: when I execute the command manually, it works:

MP4Box -add /usr/src/videos/f4e45230-d02e-4504-a972-eac5957c973f-0.h264 /usr/src/videos/f4e45230-d02e-4504-a972-eac5957c973f-0.mp4

AVC-H264 import - frame size 640 x 480 at 25.000 FPS
AVC Import results: 151 samples - Slices: 3 I 148 P 0 B - 0 SEI - 3 IDR
Saving to /usr/src/videos/f4e45230-d02e-4504-a972-eac5957c973f-0.mp4: 0.500 secs Interleaving

I tried executing whoami inside the call(); it returns user as expected.

🆗 The issue was the Python subprocess call(). I don't know why, but it didn't work; even a simple ls /usr/src/videos failed. I switched to os.system() with the exact same command, and it worked.
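
A likely explanation, not verified in the issue: subprocess.call treats a single string as the name of the executable unless shell=True, so the whole command line is looked up as one file, which matches the FileNotFoundError above (paths are illustrative):

import subprocess

cmd = "MP4Box -add input.h264 output.mp4"

# subprocess.call(cmd) raises FileNotFoundError: there is no executable
# literally named "MP4Box -add input.h264 output.mp4".
subprocess.call(cmd, shell=True)  # works: behaves like os.system(cmd)
subprocess.call(cmd.split())      # works: program and arguments as a list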

Send to Telegram

Telegram needs MP4 videos but the PiCamera produces raw H264 video, so we need to convert videos before we can send them.

The Pi captures video as a raw H264 video stream. Many media players will refuse to play it, or play it at an incorrect speed, unless it is "wrapped" in a suitable container format like MP4. The easiest way to obtain an MP4 file from the raspivid command is using MP4Box.

To convert videos, we will use MP4Box -add video.h264 video.mp4 from the gpac package.
Sources:

Celery job to process video

⚠️ we might have issues with the Python subprocess call() method inside a Celery job. We use it to execute MP4Box, which transforms the raw H264 video into MP4.

Sources:

🆗 In the end we used os.system, but we could still hit this issue.

Retrieve the whole video

For the short video it's not an issue, because we send an MQTT message. Same for the end of a motion. But what if the resident stops the alarm while a motion is being recorded? Well, I think the alarm should not turn off while a motion is in progress -> only for schedules. Following this thought, I opened issue #130 to improve the turn-off feature. We have a flaw in the process.

Well, to avoid this issue, we have some options:

  • in the smart-camera, send an MQTT message before the terminate() call.
  • in the webapp, if we receive a disconnect message from the alarm, try to retrieve the video if we don't have it. That feels hacky to me... 🤔

feat: as a resident, I want to stream my cameras

Ideas

MQTT over websockets

ℹ️ Mosquitto does not support the WebSocket protocol by default (see the "protocol" section of the documentation).

Websockets support is currently disabled by default at compile time.

PiCamera proposal - a simple HTTP stream of pictures

I can stream using Django's StreamingHttpResponse (there is a Stack Overflow answer that explains it, using raw HTTP). A minimal sketch is below.
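
A sketch of that idea: a multipart MJPEG stream served by a Django view. get_jpeg_frame() is a hypothetical helper returning one camera frame as JPEG bytes:

from django.http import StreamingHttpResponse

def frame_generator():
    while True:
        frame = get_jpeg_frame()  # hypothetical: one JPEG frame as bytes
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + frame + b"\r\n")

def stream(request):
    # Browsers replace the displayed image on every multipart chunk.
    return StreamingHttpResponse(
        frame_generator(),
        content_type="multipart/x-mixed-replace; boundary=frame",
    )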

HTTP stream

It might be a solution, but it is almost the same thing as the previous idea? I need to check it.
https://dev.to/nwtgck/the-power-of-pure-http-screen-share-real-time-messaging-ssh-and-vnc-5ghc

❌ RTSP protocol

Could be a good idea because many IP cameras seem to use this protocol, but it seems complicated to set up. I would need something like GStreamer and some Python code to do the bridging... This adds new software to the stack and a lot of code, and I'd like to avoid that.

❌ WebRTC with Puppeteer

The idea is to use WebRTC with Puppeteer:

  • On my RPi I open Chrome with Puppeteer, go to a page that requests camera access, and stream it via WebRTC.

❌ Rejected because this solution would be too complicated to implement, and I don't even know whether it would work. It would also introduce a new language into the stack (JS with Node.js)...

On paper it looks good, but I think I'd run into many issues:

  • WebRTC isn't that easy.
  • Puppeteer can't access the CSI camera. Maybe related to Chrome.

feat: build the opencv docker image for different arch

I built this image on an ARM Raspberry Pi, so the Docker image can't run on my PC. This is a problem because I currently don't have access to the Pi and I still want to develop the project. So I will have to build the image for multiple architectures.

The issue appeared when I tried to run the "py" service of the "attachments" app. The image had been built on a Pi, so on my computer I got this error:

standard_init_linux.go:211: exec user process caused "exec format error"

Which is far from understandable... I found the explanation here.

Resources

https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/

⚠️ don't forget to enable Docker's experimental features, then restart the service.

I'm using the buildx (experimental) Docker tool.
I had to create a builder and use it:

docker buildx create
# that created the builder "epic_austin" that we are going to use:
docker buildx use epic_austin 

Current

I can't build for arm/v7. I found an issue describing the exact same problem; I have to follow its instructions.
Here is the error that I got:

failed to solve: rpc error: code = Unknown desc = failed to load LLB: runtime execution on platform linux/arm/v7 not supported
❯ docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS  PLATFORMS
epic_austin *  docker-container                    
  epic_austin0 unix:///var/run/docker.sock stopped 
default        docker                              
  default      default                     running linux/amd64, linux/386
❯ docker buildx inspect --bootstrap
[+] Building 0.9s (1/1) FINISHED                                                                                                                     
 => [internal] booting buildkit                                                                                                                 0.9s
 => => starting container buildx_buildkit_epic_austin0                                                                                          0.9s
Name:   epic_austin
Driver: docker-container

Nodes:
Name:      epic_austin0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/386

Install ...

I followed the instructions from the GitHub readme.

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

Don't forget to restart docker!

Then you'll be able to run containers with different architectures, so you'll also be able to build your images for different archs.

❯ docker buildx ls                                                      
NAME/NODE      DRIVER/ENDPOINT             STATUS  PLATFORMS
epic_austin *  docker-container                    
  epic_austin0 unix:///var/run/docker.sock running linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/ppc64le, linux/arm/v7, linux/arm/v6
default        docker                              
  default      default                     running linux/amd64, linux/386

Here is an example of how to build a multi-arch image:

docker buildx build --push --platform linux/arm/v7,linux/amd64 --tag mxmaxime/rpi4-opencv:latest .

Final command

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes && \
sudo systemctl restart docker && \
docker buildx build --push --platform linux/arm/v7,linux/amd64 --tag mxmaxime/rpi4-opencv:latest .

feat: monitor who is on the network

This will unlock powerful features like:

  • If [list of devices] are not connected, notify me to turn the alarm on because I may have forgotten. Or just switch it on with an extra condition: nobody is detected on any camera.

fix: trailing slash for mqtt topics

If we publish a message on the topic /security/sound but listen on the topic security/sound, the message is effectively lost because nobody is on the line.

I just lost 3 hours debugging this stupid error!
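
A cheap guard against this class of bug is to normalize every topic in one place, e.g.:

def normalize_topic(topic: str) -> str:
    # "/security/sound" and "security/sound/" both become "security/sound",
    # so publisher and subscriber can no longer silently disagree.
    return topic.strip().strip("/")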

Let the user choose how she wants to be notified

We have a few notification "transports" (things that can notify users). For instance, we have the free (French) carrier to send SMS, Telegram, in the future Twilio, and so on.

Note that for the free carrier API we do not need to save the phone number; that's handled by their service, because the phone number is linked to the credentials. So we won't talk about making SMS sending modular for this feature yet.

First of all, a house may not use all of these transports. For instance, the project lets me notify my residents with Twilio, but as I'm a Free customer I want to use the free carrier; I don't want to set up a Twilio account.

The resident chooses among the transports that have been set up for their house. Then, of course, they need to configure each transport for their account: for Twilio it's the phone number, for the free carrier it's their account credentials...

Then, in the notification code, I have to check, for each user, which transports to notify them on. It could be, for instance, Telegram and SMS.

Technically

First, I thought of using a generic foreign key to link my UserNotificationSetting model to other models, like UserTelegramSetting, which contains a chat_id linked to a User. Why? Because with a generic foreign key, I don't have to modify UserNotificationSetting when I add notification transports. But that leads to bad database design: it's not clear what's going on, and we lose real foreign keys and so on.

So I'm thinking of using one foreign key per notification transport, as sketched below.

Think about future features

Imagine I want to be able to snooze notifications for a particular transport. To do so, I need to register one entry per transport. So, in the model, I have to validate that exactly one foreign key field is filled.
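
A sketch of the one-foreign-key-per-transport idea with that "exactly one filled" validation; model and field names are illustrative, not the project's actual schema:

from django.core.exceptions import ValidationError
from django.db import models

class UserNotificationSetting(models.Model):
    telegram = models.ForeignKey(
        "UserTelegramSetting", null=True, blank=True, on_delete=models.CASCADE)
    free_carrier = models.ForeignKey(
        "UserFreeCarrierSetting", null=True, blank=True, on_delete=models.CASCADE)

    def clean(self):
        transports = [self.telegram, self.free_carrier]
        if sum(t is not None for t in transports) != 1:
            raise ValidationError("Exactly one transport must be set.")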

thinking: on motion detected save picture & ref in database

The feature was introduced by PR #46.

What to save in database

Currently we save the full path to the image, e.g. /usr/src/app/media/d47eb8b4-02f3-4d3d-ab68-8f7a40385b84.jpg.
I think this can be problematic if we want to back these files up somewhere other than the local disk (Amazon S3 for instance).

  • Should we save only the file name, and have it resolved by the component that knows how to locate a file from its name?
  • How do we know how the file is stored? S3, local, or something else?

feat: take picture from a specified device

As a resident, I might want to take/receive a picture from a specified device. I can stream my cameras, but what if I have a slow connection? Or I might just want a simple picture: easier and quicker than a video stream...

Technically

Remember that, by design, we don't include any device_id in the MQTT topic. So here, as we want to target some device(s), we have to put this information in the JSON payload. While writing this, I'm not sure it is a "good" solution. It might be better to add a specific topic with the device_id in it, for instance camera/take_picture/<device_id>. By doing so, we move the "pain" elsewhere: if we want to take pictures from "outside", we would have to publish multiple messages on different topics. This is a trade-off...

feat: create different modes for my cameras

User Story

I have a front-door camera; during the day I want it to detect people, but at night I want it to act as a security alarm station.

Technically

I do not want to define these kinds of specifications in the camera device/program. Its job is to detect whether somebody is here and to send an MQTT message.

Then the main program receives the "hey, somebody is here" MQTT message with the device_id (used for localization). Now we can decide what to do: is the system in security mode? Should it only detect people? And act accordingly. This mode is set at the device level, to be able to:

  • Say that my outdoor front camera is in detection mode (notify me and that's it).
  • And, at the same time, say that my indoor cameras are in security mode when I'm not here.

Deciding what to do: call it a scenario.

Thinking

But if I do this, do I need two separate entities, alarm and camera? Alarm loses its meaning 🤔
I would have a camera, which:

  • detects or not: object detection linked to it.
  • streams or not

And the alarm schedule would become a "camera schedule" that updates is_detection.

improve: alarm schedule detect overlap

Imagine the resident defines two schedules:

  • 8am to 5pm -> on. So, at 5pm the alarm is going to be turned off.
  • 4pm to 6pm -> on. So, at 4pm the alarm is going to be turned on, but it will already be on. No real problem here.

But from 5pm to 6pm the alarm will be off despite the configuration, because we have an overlap.

Technically

To detect an overlap when we insert a new row, we have to check, for every day:

IF start < another_end AND end > another_start THEN
overlap()
END
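
In Python terms, checking a candidate schedule against the existing ones for a given day (a sketch; times are any comparable values, e.g. datetime.time):

def overlaps(start, end, existing_schedules):
    # Two intervals overlap iff each one starts before the other ends.
    return any(start < other_end and end > other_start
               for other_start, other_end in existing_schedules)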

Let the resident define what he/she wants to do when motion is detected on a device

User Story

As a resident, I want to be able to configure what the system is doing when a motion is detected for a particular camera.

With this feature, we will be able to leverage the power of the motion detection feature.

As requested by Alexandre (a resident): he wants to know when somebody is in front of his front gate, for instance via a Telegram notification.

But for some other cameras, he wants to be notified and to tell the people in front of the camera to leave (alarm!). So, instead of duplicating the code... we will add the possibility to configure what to do when a motion is detected for a particular device/location...

I see another huge feature: being able to configure what to do based on conditions, basically the time and/or alarm status... The idea is to have a camera that notifies me if someone is in front of my door during the day (it could be a kind delivery guy 📦), and rings the alarm at night! 🔊

feat: handle backup pictures

Today, when a person is detected, a picture is taken and stored in the webapp's local "media" folder (handled by Django). This is totally fine for the first release, because we won't have a ton of pictures, and pictures are also sent to Telegram, so they cannot really be lost (very, very low probability).

But I would like to have some storage policy. Maybe after x days move them to an S3 archive, or simply delete them. I don't know, and I think we should leave some choices to the resident.

Nextcloud

We could upload files to a Nextcloud instance (as an option). That could be nice, as Nextcloud is an open-source project.
For now, I have these resources:

feat: play a sound when a person is detected & stop it on conditions

When a motion is detected, the system has to ring the alarm; basically, "play a sound" for now.

When a motion is detected, we publish a message via MQTT, and the webapp receives it.
This app then has to publish another MQTT message to ring the alarm.

We send this message via the webapp because, in the future, we might add features like "when a person is here, just notify the resident, don't ring the alarm" (feature #38).

Then, when no motion is detected anymore, we publish a new message to stop the sound. It's not the same message, as we might add some extra logic around stopping the sound:

  • wait a little before stopping the sound.
  • give the resident the possibility to stop the sound...

⚠️ It's a sensitive feature!

  • The sound has to stop when the alarm switches off. Don't even check whether it's ringing; just publish the stop message.
  • The sound has to stop X seconds after the detected person is gone.

feat: Ansible to deploy everything

We use Docker & docker-compose files, but we still need to set up the hosts. Today we have a bunch of scripts/Makefiles to run for this, which is far from ideal. Everything can be automated with Ansible. We will keep our little scripts for some jobs, but we will use Ansible to automate their execution and simplify the installation process, which is error prone.

Motion detected even if nobody is here

I'm using a NoIR Pi Camera, and sometimes when it's a little dark I receive a motion notification but... nobody's there.
I think I'll review the motion detection algorithm; it's been there since December 2019 without really being tested.

Ideas

Histogram of Oriented Gradients descriptor

HOG, for short, is used in computer vision & image processing to detect objects; OpenCV already implements a lot around this technique.
Maybe we can use it to detect humans. I've read that this technique is quite heavy for video processing, but I can sample my Pi camera at a very low FPS, so it's closer to still-image processing. I don't care about tracking the person or knowing where she is in real time; I just want to know when somebody is here.
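
A minimal sketch of OpenCV's built-in HOG people detector (the image file name is illustrative):

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.jpg")
# rects are (x, y, w, h) boxes around detected people.
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
print(f"{len(rects)} people detected")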

❎ Bad performance. I've tested a lot of situations, and frequently people aren't detected, which is pretty bad for a security system!

TensorFlow object detection API

❎ TensorFlow is hard to use on the Pi. I want to use TensorFlow 2 and it's not available as a pip package. I've tried many, many things (compiling with Docker on my computer and so on); nothing worked. Then I thought of TensorFlow Lite. We may lose a little accuracy, but I think it will do the job.

🧑‍🏭 I'm working on object detection with TensorFlow Lite. I followed the installation instructions, and 🎉 it's working! I now have to test the object detection API.
I will test with this example provided by TensorFlow.

I got this error from PIL lib:

ImportError: libopenjp2.so.7: cannot open shared object file: No such file or directory

Which I fixed by installing this:

sudo apt-get install libopenjp2-7

Then I got this error, also from PIL:

ImportError: libtiff.so.5: cannot open shared object file: No such file or directory

Which I fixed by installing this:

sudo apt-get install libtiff5

I found these fixes on Stack Overflow.

Motion detected message + picture in one mqtt publish

Limitation

When a motion is detected, we want to send the picture where it was detected. We use mqtt to send the picture as bytearray because we cannot send the picture with a JSON. We could convert the image as base64 to be able to send it with JSON, but it would be very inefficient. Basically, we can't send the bytearray into a JSON, or at least I didn't find the solution.

The issue

Otherwise I send two messages: one saying "someone is here" and one with the picture. That's fine, but the first one creates a database entry and the second one modifies it to add the picture path. We can have a race condition here... The order can't be guaranteed.

Or I can simply retry the second task if we don't find the database row: https://docs.celeryproject.org/en/stable/userguide/tasks.html#retrying

The solution

I have decided to change the workflow. When a picture comes in, we save it on the local drive (and maybe somewhere else in the future...) and store the file path in the database without any link to the motion event. To find the pictures "associated" with a motion detection event, we can select pictures by date, +- 10s for instance.

Otherwise, to link the picture with the event, we would need this kind of code in the task:

from celery import shared_task

@shared_task(bind=True, name="security.camera_motion_detected")
def camera_motion_detected(self, device_id: str, picture_path: str):
    try:
        camera_motion = alarm_models.CameraMotionDetected.objects.filter(picture_path__isnull=True).latest('created_at')
    except alarm_models.CameraMotionDetected.DoesNotExist as exc:
        # retrying needs a bound task (bind=True), hence `self`
        raise self.retry(exc=exc)
    else:
        camera_motion.picture_path = picture_path
        camera_motion.save()

        messaging = Messaging()
        messaging.send_message(picture_path=picture_path)

First, it's ugly; just look at the except! And second, it's not reliable: we could associate a picture with the wrong event when two (or more) devices detect motion and send pictures.
