
thespaghettidetective / obico-server


Obico is a community-built, open-source smart 3D printing platform used by makers, enthusiasts, and tinkerers around the world.

Home Page: https://obico.io

License: GNU Affero General Public License v3.0

Languages: Dockerfile 0.04%, Python 27.78%, HTML 7.57%, JavaScript 23.90%, Shell 0.24%, Vue 37.29%, SCSS 2.58%, Sass 0.03%, CSS 0.58%

obico-server's Introduction

The Obico Server

This repo is everything you need to run a self-hosted Obico Server.

Obico is a community-built, open-source smart 3D printing platform used by makers, enthusiasts, and tinkerers around the world.

The AI failure detection in this project is based on a Deep Learning model. See how the model works on real data.

Install and run the server

Note: For more detailed instructions, head to the Obico Server guide.

Prerequisites

The Obico Server only requires a computer to run. Even an old PC (up to about 10 years old) will do just fine. An Nvidia GPU is optional, but it can vastly increase the available processing power and the number of printers the server can handle.

Detailed hardware minimum specs.

Software requirements

The following software is required before you start installing the server:

  • Docker and Docker-compose (you don't have to understand how Docker or Docker-compose works).
    • Install Docker (Windows, Ubuntu, Fedora, CentOS, Mac). Important: If your server has an old Docker version, please follow the instructions in these links to upgrade to the latest version; otherwise you may run into all kinds of weird problems.
    • Install Docker-compose. You need Docker-compose V2.0 or higher (a quick version check is shown after this list).
  • git (how to install).
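If you are not sure whether the installed versions are recent enough, a quick check from a terminal looks like this (exact output varies by platform):

docker --version           # any reasonably recent release is fine
docker compose version     # Compose V2 reports "Docker Compose version v2.x"
docker-compose --version   # a standalone "1.x" here means the old Compose V1 and should be upgraded
git --version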

Email delivery

You will also need an email account with SMTP access enabled. (Gmail will not work: as of May 30, 2022, Google removed the option to allow SMTP access.) Other webmail providers such as Yahoo should work, but we haven't tried them.
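The server's email settings typically come in through environment variables on the web container. The snippet below is only a sketch using Django-style variable names, which are an assumption here; check docker-compose.yml or the server guide for the exact names your version expects:

# Example SMTP settings (values are placeholders for your provider)
EMAIL_HOST=smtp.example.com
EMAIL_PORT=587
EMAIL_HOST_USER=you@example.com
EMAIL_HOST_PASSWORD=your_smtp_password
EMAIL_USE_TLS=True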

Get the code and start the server

  1. Get the code:
git clone -b release https://github.com/TheSpaghettiDetective/obico-server.git
  2. Run it! Do one of these depending on which OS you are using:

    • If you are on Linux: cd obico-server && sudo docker compose up -d
    • If you are on Mac: cd obico-server && docker-compose up -d
    • If you are on Windows: cd obico-server; docker-compose up -d
  3. Go grab a coffee. Step 2 will take 15-30 minutes.

  4. There is no step 4. This is how easy it is to get Obico up and running (thanks to Docker and Docker-compose).

Open "http://localhost:3334" on the same computer. Voila - your self-hosted Obico Server is now up and running!

[Screenshot: login page]
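If the page does not load, a quick way to see whether all containers came up is sketched below; the service name web is an assumption based on the repo's compose file, so adjust it if yours differs:

cd obico-server
sudo docker compose ps            # every service should be shown as running / Up
sudo docker compose logs -f web   # watch the web container finish starting
curl -I http://localhost:3334     # should return an HTTP response once the server is ready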

Server Configuration

Upon a fresh installation, the Obico Server only works on localhost. You will need to configure its IP address. Here is how:

Obtain server's IP address

Recommended Read: Connecting to your server with a .local address

This refers to the LAN IP address that has been given to the computer that the Obico server is running on.

  • If you are on Linux: Open the wifi settings and select "settings" for the network your device is currently connected to. Look for the IPv4 value.
  • If you are on Windows: Select "Properties" for the network your device is connected to, then look for the IPv4 value.
  • If you are on Mac: Go to Settings > Network. You will find your IPv4 value below the wifi status.
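If you prefer the command line, these are quick ways to find the LAN IP address (pick the one for your OS; interface names vary):

hostname -I               # Linux: prints the addresses assigned to this machine
ip -4 addr show           # Linux: look for the "inet" line of your wifi/ethernet interface
ipconfig                  # Windows: look for "IPv4 Address" under the active adapter
ipconfig getifaddr en0    # Mac: address of interface en0 (often wifi; the name may differ)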

The Obico Server needs to have an IP address that is accessible by OctoPrint or Klipper. It can be a private IP address (192.168.x.y, etc) but there needs to be a route between OctoPrint and the Obico Server.

It is also recommended to set a static IP to avoid issues with changing IP addresses. Please look up your Wi-Fi router's guide on how to do this.

Login as Django admin

  1. Open the Django admin page at http://your_server_ip:3334/admin/.

Note: If the browser complains "Can't connect to the server", wait a couple more minutes. The web server container may still be starting up.

  2. Log in with username [email protected] and password supersecret. Once logged in, you can (and are highly encouraged to) change the admin password using this link: http://your_server_ip:3334/admin/app/user/1/password/.
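If you prefer the terminal, Django ships a changepassword management command. This is only a sketch: the web service name, the manage.py invocation, and the ADMIN_EMAIL placeholder are assumptions about this particular compose setup, so adjust them to match your installation.

cd obico-server
# Replace ADMIN_EMAIL with the admin username shown above
sudo docker compose exec web python manage.py changepassword ADMIN_EMAIL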

Configure Django site

  1. In the same browser window, go to the address http://your_server_ip:3334/admin/sites/site/1/change/. Change "Domain name" to your_server_ip:3334. No "http://" or "https://" prefix and no trailing "/", otherwise it will NOT work. Note: Deleting the original site and adding a new one won't work, thanks to the quirkiness of the Django sites framework.

  2. Click "Save". Yes it's correct that Django is not as smart as most people think. ;)

[Screenshot: site configuration]

Note: If you are using reverse proxy, "Domain name" needs to be set to reverse_proxy_ip:reverse_proxy_port. See using a reverse proxy for details.
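For reference, the same Site record can also be updated from the terminal through Django's sites framework. This is a minimal sketch, assuming the compose service is called web and manage.py sits in the container's working directory; the admin page above remains the documented way:

cd obico-server
sudo docker compose exec web python manage.py shell -c "
from django.contrib.sites.models import Site
site = Site.objects.get(id=1)
site.domain = '192.168.0.10:3334'  # your_server_ip:3334 - no http:// prefix, no trailing /
site.save()
"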

That's it! You now have a fully-functional Obico Server that your OctoPrint can talk to. We hope setting up the server has not been overwhelming.

Configure clients to use self-hosted Obico Server

Contribute to Obico

Feeling excited? Want to contribute? Check out how.

Difficulties in getting the Obico server up and running?

Browse and search in the Obico Server guide website. If you can't find the answer there, consult the Discord or open an issue.

Thanks

  • BrowserStack generously sponsors a free license so that I can test Obico webcam streaming on different browsers/versions.
  • Moonraker for the source code to extract g-code metadata.

obico-server's People

Contributors

aarontuftss, ajdavids, apexarray, arcreigh, chand1012, davidloenborg, dependabot[bot], dipuzyrev, e-fominov, encetamasb, goopsie, insomaniac49, ivanrybnikov, jcshumpert, kennethjiang, kuiwang2022, lnjustin, lyricpants66133, mallocarray, manjulsigdel, neilamhailey, neilhailey, nvtkaszpir, puviez, rai-oliveira, raymondh2, saggit, smartin015, wwsean08, xamctbo


obico-server's Issues

Server does not Compile

Performing updates on the server (or fresh installs) with commits past commit 19942c1 results in not being able to connect to the server locally or configure it.

Testing detection and smtp

I'm trying without success to test the detection system. I turned the sensitivity all the way up, and I'm trying to place a ball of spaghetti on the build plate mid-print. It does get detected and sometimes rises to the red meter level, but I never get a notification. I have tried moving the ball around as well as removing it and placing it back after a minute.

Is there a way to trigger a notice for purposes of testing the smtp? Or a recommended way to produce an alert to confirm the system is working?

Use Free-mobile home automation notification.

I don't know if it is the same in every country, but in France, with the mobile operator "Free", we have an API for sending SMS to our own number.

It's just a GET request like
https://smsapi.free-mobile.fr/sendmsg?user=28XXXX47&pass=PWj20HXxXxXxsG&msg=Hello%20World.

It would be very nice if we could use this service for failure notifications.
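For reference, the request above is simple enough to test from a terminal with curl; the user and pass values below are placeholders for your own Free Mobile API credentials:

curl -G "https://smsapi.free-mobile.fr/sendmsg" \
  --data-urlencode "user=YOUR_FREE_USER_ID" \
  --data-urlencode "pass=YOUR_API_KEY" \
  --data-urlencode "msg=Possible print failure detected"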

run on boot

On an Ubuntu system, how do we set it to run at boot?
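One common approach, sketched below for a systemd-based Ubuntu install: make sure the Docker daemon starts at boot, and give the containers a restart policy so Docker brings them back up after a reboot (skip the last step if your docker-compose.yml already sets one):

# Start the Docker daemon at boot
sudo systemctl enable --now docker

# Start the stack, then mark its containers to restart automatically
cd obico-server
sudo docker compose up -d
sudo docker update --restart unless-stopped $(sudo docker compose ps -q)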

Suggestions for a faster build process

Hi, I just stumbled upon your project, it looks really cool.
In the readme you mentioned the long build process. I just skimmed over your docker configuration and have a few suggestions:

  • ml_api: Use nvidia/cuda:9.0-runtime-ubuntu16.04 instead of nvidia/cuda:9.0-devel-ubuntu16.04, it should be sufficient and uses 1GB less space. -> Saves download time
  • web: Use python:3.6-alpine instead of python:3.6; it uses 850MB less space. vim and ffmpeg are available in Alpine, and vim can probably be removed. -> Saves download time
  • web: Make better use of docker caching: First add the requirements file, then pip install, then add the rest of the project. Using this the build process is much faster if no package requirements have changed. -> Faster development cycle
  • redis (minor): Use redis:alpine, uses 50 MB less space
  • all docker images: build them and upload them to Dockerhub. Most of the time the build process downloads and installs apt and pip packages. By prebuilding and uploading to Dockerhub, this can be consolidated into one docker pull for each image. If you still provide the Dockerfiles, anybody can build the images themselves when they want to debug something. -> Saves build time.

By using all suggestions above, you could get rid of all package installations (using the prebuilt image) which saves a lot of time and also save 1.9GB of Downloads.

Model is too complex

Hi,

Disclaimer: the suggestions below are in an effort to reduce the model size to where I can run it on my RPi 3. My ML experience is solely in emotion detection.

I am looking over this code, because it's awesome, and looking at your model I believe it is too complex for the task at hand.

  • Too many convolutions: I see 23 convolutions and a respective number of pooling layers. This is too many, you are wasting too much space on useless data. Check the activation of your neurons as a percentage of the model size, you might be surprised how few are actually used.
  • Too many filters on convolutions: Many of the model convolutions have 512-1024 filters. This is way too many in my opinion -- past 32 or 64 filters, it is more or less useless at extracting meaningful data. I mean, just look at this Example Sigma = 6 convolution
  • Too large of an input picture: This is a harder problem to solve, but a more efficient model will arise when your input dimensions are smaller. I have not looked thoroughly through the code, however, you may want to explore recognizing the nozzle and carriage and take a fixed image bounding around the estimated nozzle location. This would be done through classical computer vision as opposed to machine learning. Or just follow the GCODE around and estimate the printer head location / dimensions.

If I have some free time this week I will look at the code and see if I can 'minify' the model and get it running on the Pi. That would be awesome and could be something you charge extra for as part of the service.

Add tags to Spaghetti Gallery

As a viewer of spaghetti gallery, I'd like to see the tags associated with each timelapse, and upvote existing tags or add new tags, such as "false alarm", "missed failure", "back light".

model.so is not compiled with multiple thread/OPENMP support

The current version of model.so that is provided in The Spaghetti Detective only runs on a single CPU core when not using GPUs. This is not very efficient.

I cloned https://github.com/AlexeyAB/darknet

modified Makefile Lines 6 and 7

OPENMP=1
LIBSO=1

Built libdarknet.so

Copied it to TheSpaghettiDetective/ml_api/bin and replaced the existing model.so with the new file.

Now TSD runs multi-threaded and is much quicker on a system with lots of CPU cores but no Nvidia GPU.
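For reference, the steps above as a shell sketch; the relative repository path and the default OPENMP=0/LIBSO=0 values in the Makefile are assumptions, so adjust them to your checkout:

git clone https://github.com/AlexeyAB/darknet
cd darknet
# Enable OpenMP and shared-library output (Makefile lines 6 and 7)
sed -i 's/^OPENMP=0/OPENMP=1/' Makefile
sed -i 's/^LIBSO=0/LIBSO=1/' Makefile
make
# Replace the bundled single-threaded library with the new build
cp libdarknet.so ../TheSpaghettiDetective/ml_api/bin/model.so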


More reliable prediction

Test the idea of using p-increase, rather than p itself. Maybe a 5-sample rolling average over a 50-sample rolling average?

Don't delete timelapses on image rebuild

Currently, a docker-compose up --build results in an explicit deletion of the timelapses while rebuilding the web container:

Deleting 'media/tsd-timelapses/private/1_p.json'
Deleting 'media/tsd-timelapses/private/2.mp4'
Deleting 'media/tsd-timelapses/private/1_tagged.mp4'
Deleting 'media/tsd-timelapses/private/1.mp4'
Deleting 'media/tsd-timelapses/private/1_poster.jpg'
Deleting 'media/tsd-timelapses/private/2_poster.jpg'
Deleting 'media/tsd-timelapses/private/2_p.json'
Deleting 'media/tsd-timelapses/private/2_tagged.mp4'

It didn't seem to matter if this directory was mounted as an external volume or not. This does seem counter-productive if you want to keep the timelapses for future training.

Never seeing octo/status view POST

I'm attempting to run my own server (not through docker-compose, but rather a custom K8s chart).

I am seeing POSTs to /octo/pic and I am able to see the image of my printer on the web UI, however, the web UI never shows the filename or estimated time left. The web container logs don't ever indicate any POSTs to /octo/status.

Any ideas on what to check next?

$ kubectl -n tsd logs the-spaghetti-detective-568cb69cc6-5b295 -c the-spaghetti-detective-web | grep -i 'POST /api/octo/pic' | wc -l
16
$ kubectl -n tsd logs the-spaghetti-detective-568cb69cc6-5b295 -c the-spaghetti-detective-web | grep -i 'POST /api/octo/status' | wc -l
0

Concurrent Prediction

The current ml_api forces workers=1 because it calls out to native code and a lot of static bridging variables exist between Python and C. We need a way to enable concurrency > 1 when running prediction. Ideally it should be done in a way that maximizes GPU utilization.

Different ports in web/Dockerfile and docker-compose.yml

For the container "web", the image is created from web/Dockerfile. This Dockerfile has EXPOSE 3333, but docker-compose.yml starts the service on port 3334 and defines ports "3334:3334". Wouldn't it make sense to get these differences in line and define just one port?

image in email broken

Not sure if it's my configuration that's to blame, but when I receive the email saying my print might be failing (I have it set to just notify), the snapshot in the email is not showing.

Computer specs

I have 16 printers that will soon move to a backyard office I'm hoping to build or possibly even a small rented office space and I would like to eventually have them all monitored by TSD. I'll likely have to build a new pc for the job so I'm trying to figure out how much computing power I'll need.

What specs would you recommend for such a setup? Is it even possible to build one machine to watch 16 printers? Or maybe it's less demanding than I'm picturing?

How about the specs for most other users who will monitor just one printer? How important is the GPU? Watching one printer, I see a lot of CPU usage but not so much on my Nvidia GTX 1060.

Waiting on connection

I have successfully installed the OctoPrint plugin and was able to connect and test using the beta servers. Then I wanted to try it locally, so I set up the TSD server on my local computer and added the printer. OctoPrint verifies the token correctly, but it never connects.

bed clips false alarm

I get false alarms due to the bed clips holding my glass plate on. Is it possible maybe to train the model to ignore them? I have no idea how ML modeling works so forgive me if that's a ridiculous suggestion.

Compose failed

When running docker-compose up -d, here's what I get

ERROR: Service 'ml_api' failed to build: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:297: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown

Docker host is ubuntu 16.04, running Docker CE 18.06.3, Compose version 1.23.2
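This is the kind of "weird problem" the install notes above attribute to an old Docker version; 18.06 and Compose 1.23 are both quite dated. A quick sketch for checking and upgrading Docker on Ubuntu using Docker's official convenience script (review the script before running it):

docker --version
docker-compose --version
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh    # installs/updates to a current Docker release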

Explain minimum requirements

First off, kudos for a great-looking project 🎉

It would be nice to have some indication of the hardware needed to run this server side bit. I believe CUDA only runs on specific hardware, and seeing the nvidia docker image, I guess you need an NVIDIA card.

I am a bit of a newbie on the subject, but would this project run on a low-end Nvidia Jetson Nano Developer Kit ($99)? https://www.sparkfun.com/products/15297

I think it might be a great project for tinkerers (into 3D printing) that want to look into machine learning and image processing. Thanks for sharing 👍 👍

windows ml-api build fail

I was able to start the build under Windows in PowerShell by cd'ing to the TSD directory and then running the normal docker-compose up -d. I'm sure there is a syntax that would make it one line, but I'm not that versed.

The problem I get is:

ERROR: Service 'ml_api' failed to build: The command '/bin/sh -c wget --quiet -O model/model.weights $(cat model/model.weights.url)' returned a non-zero code: 8

Any ideas? I checked the URL in the file and was able to download it manually in a browser, so it's reachable from my PC.
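wget exit code 8 means the server returned an error response. A quick sketch for seeing the actual HTTP status is to rerun the download by hand without --quiet; the model/ paths are taken from the error message above and assume you run this from the ml_api directory:

cd ml_api
cat model/model.weights.url     # confirm the URL the build step is fetching
wget -O model/model.weights "$(cat model/model.weights.url)"   # verbose output shows the HTTP status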

Streaming video broken?

hi,

I was having issues connecting the plugin, and this got fixed here:
#26

Now it connects and the image appears when the printer is not printing. When I start printing, there is no video and also no detection.

Let me know if I can attach some logs, or need more information.

(The idea is amazing!)
Thanks!

Add "Do not pause on failure for current print" toggle to My Printer page

When a user observes false positives, or otherwise doesn't want the print to be paused on possible failures, they may not want to change the settings because they are afraid they will forget to change them back. A toggle that is effective only for the current print is a convenient way to temporarily disable pause-on-failure.

Should not send images when printer is inactive

The plugin connects to 'app.thespaghettidetective.com' once every 10 seconds to send a single picture from the webcam for the server to run the algorithm on. In a 12-hour period, the OctoPi connected to 'app.thespaghettidetective.com' approximately 11K times while the printer was idle.

The plugin should only function when OctoPi senses a print.

Building/rebuilding the so file

I was curious if you could share some details about the actual NN running. How was it generated, and is it possible to rebuild the NN ourselves?

fedora29, docker fail to build

[root@gedora TheSpaghettiDetective]# docker-compose up -d
Building ml_api
Step 1/11 : FROM nvidia/cuda:9.0-devel-ubuntu16.04
Trying to pull repository docker.io/nvidia/cuda ...
sha256:64b79f57b0f0ddfdb21f6f0c45c900c1a9a3751e87c3dce681b361bbe4163fe1: Pulling from docker.io/nvidia/cuda
7b722c1070cd: Pull complete
5fbf74db61f1: Pull complete
ed41cb72e5c9: Pull complete
7ea47a67709e: Pull complete
52efd3da8bcd: Pull complete
eea82f174227: Pull complete
0d7845ca9ae6: Pull complete
cfbd609f9a85: Pull complete
ed1cb7fbcbd9: Pull complete
Digest: sha256:64b79f57b0f0ddfdb21f6f0c45c900c1a9a3751e87c3dce681b361bbe4163fe1
Status: Downloaded newer image for docker.io/nvidia/cuda:9.0-devel-ubuntu16.04
 ---> c24bd4961e81
Step 2/11 : RUN apt update
 ---> Running in a27a529a13ec

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Err:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64  InRelease
  Operation timed out after 0 milliseconds with 0 out of 0 bytes received
Err:2 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64  InRelease
  Operation timed out after 0 milliseconds with 0 out of 0 bytes received
Ign:3 http://archive.ubuntu.com/ubuntu xenial InRelease
Ign:4 http://security.ubuntu.com/ubuntu xenial-security InRelease
Ign:5 http://archive.ubuntu.com/ubuntu xenial-updates InRelease
Err:6 http://security.ubuntu.com/ubuntu xenial-security Release
  Connection failed [IP: 91.189.88.162 80]
Ign:7 http://archive.ubuntu.com/ubuntu xenial-backports InRelease
Err:8 http://archive.ubuntu.com/ubuntu xenial Release
  Connection failed [IP: 91.189.88.152 80]
Err:9 http://archive.ubuntu.com/ubuntu xenial-updates Release
  Connection failed [IP: 91.189.88.149 80]
Err:10 http://archive.ubuntu.com/ubuntu xenial-backports Release
  Connection failed [IP: 91.189.88.152 80]
Reading package lists...
E: The repository 'http://security.ubuntu.com/ubuntu xenial-security Release' does not have a Release file.
E: The repository 'http://archive.ubuntu.com/ubuntu xenial Release' does not have a Release file.
E: The repository 'http://archive.ubuntu.com/ubuntu xenial-updates Release' does not have a Release file.
E: The repository 'http://archive.ubuntu.com/ubuntu xenial-backports Release' does not have a Release file.
ERROR: Service 'ml_api' failed to build: The command '/bin/sh -c apt update' returned a non-zero code: 100
[root@gedora TheSpaghettiDetective]#
