
trainbot's Introduction

Onlytrains

Watches a piece of train track, detects passing trains, and stitches together images of them. It should work with any Video4Linux USB cam, or with the Raspberry Pi camera v3 module.

Frontend: https://trains.jo-m.ch/

Another known deployment is at https://trains.shakik.de.

A collection of some "special" sightings.

The name Onlytrains is credited to @timethy.

It also contains some packages which might be useful for other purposes.

The binaries are currently built and tested on x86_64 and a Raspberry Pi 4 B.

Assumptions and notes on computer vision

The computer vision used in trainbot is fairly naive and simple. There is no camera calibration, image stabilization, undistortion, perspective mapping, or "real" object tracking. This allows us to stay away from complex dependencies like OpenCV, and keeps the computational requirements low. All processing happens on CPU.

The assumptions are (there might be more implicit ones):

  1. Trains only appear in a (manually) pre-cropped region.
  2. The camera is stable and the image does not move around in any direction.
  3. There are no large, fast brightness changes.
  4. Trains have a given minimum and maximum speed (configurable).
  5. We are looking at the tracks more or less perpendicularly in the chosen image crop region.
  6. Trains come from only one direction at a time; crossings are not handled properly.
    • In practice, crossings do happen, and the result is one train being chopped up, e.g. https://trains.jo-m.ch/#/trains/19212.
  7. Trains have a constant acceleration (which may be 0) and do not stop and turn around while in front of the camera. See the model sketch below.
    • In reality, this is often not true; there happens to be a stop signal right in front of my balcony...
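
Assumption 7 is what makes the stitching tractable: under constant acceleration, the pixel offset between consecutive frames is a linear function of time, which a robust estimator (the stitcher's logs mention RANSAC) can fit. A minimal sketch of that model, with illustrative names that are not taken from the trainbot source:

func modelDx(v0, a, dt float64, nFrames int) []float64 {
	// v0: speed at sequence start (px/s), a: acceleration (px/s^2),
	// dt: frame period (s). The offset between frames i-1 and i is
	// dx_i = v(t_i) * dt = (v0 + a*t_i) * dt, i.e. linear in time.
	dx := make([]float64, nFrames)
	for i := range dx {
		t := float64(i) * dt
		dx[i] = (v0 + a*t) * dt
	}
	return dx
}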

Documentation

As this is just a hobby project for me, the documentation is pretty sparse. This very README is the most important part of it. To deploy this project yourself, you should have some basic sysadmin and web server knowledge, and ideally some Go knowledge. When in doubt, the source of truth is ... the source code.

All config options can be passed as ENV vars or CLI flags. See the config struct at the top of cmd/trainbot/main.go, or run trainbot --help to see all options.
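
Purely to illustrate that duality (the real config struct at the top of cmd/trainbot/main.go will look different, and trainbot may use a dedicated library for this), an env-or-flag option in Go can be wired up like so:

package main

import (
	"flag"
	"fmt"
	"os"
)

// envOr returns the env var's value if set, else a default.
func envOr(key, def string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return def
}

func main() {
	// The flag's default comes from the env var, so the CLI flag wins
	// when both are given. Option names here are illustrative.
	input := flag.String("input", envOr("INPUT", "/dev/video0"), "camera device or video file")
	dataDir := flag.String("data-dir", envOr("DATA_DIR", "data"), "data output directory")
	flag.Parse()
	fmt.Println(*input, *dataDir)
}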

The two Makefiles (root and frontend/) also contain some hints.

Deployment

There are two parts to deploy: first, the Go binary which detects trains, and second, the web frontend.

How to get binaries? There are multiple options:

  1. go install github.com/jo-m/trainbot/cmd/trainbot@latest - let Go build and install the binary on your system.
  2. Grab a binary from the latest CI run at https://github.com/jo-m/trainbot/actions.
  3. Build via the tooling in this repo - see Development.

Raspberry Pi

Run the interactive tool to adjust camera and select a crop rectangle:

# On the host machine
make deploy_confighelper host=TRAINBOT_DEPLOY_TARGET_SSH_HOST
# Example:
make deploy_confighelper [email protected]

# On the raspberry pi
sudo usermod -a -G video pi
# The --input arg has to be adapted to your actual camera config.
./confighelper-arm64 --log-pretty --input=picam3 --listen-addr=0.0.0.0:8080

Example "Production" deployment to a remote host (will install a systemd user unit):

First, you need to create an env file (copy env.example). Then, from the host machine:

make deploy_trainbot host=TRAINBOT_DEPLOY_TARGET_SSH_HOST

# To see logs, on the target device:
journalctl --user -eu trainbot.service

Download latest data from Raspberry Pi:

ssh "$TRAINBOT_DEPLOY_TARGET_SSH_HOST" sqlite3 trainbot/data/db.sqlite3
.backup trainbot/data/db.sqlite3.bak
# Ctrl+D
rsync --verbose --archive --rsh=ssh "$TRAINBOT_DEPLOY_TARGET_SSH_HOST:trainbot/data/" data/
rm data/db.sqlite3-shm data/db.sqlite3-wal
mv data/db.sqlite3.bak data/db.sqlite3


Web frontend

The frontend is a Vue.js SPA written in TypeScript. It consists of only static files (after the JS build process). There is no web backend; the frontend simply loads the entire SQLite database from the server and then runs all queries itself. This means that the frontend can be deployed entirely independently from the trainbot binary, as long as there is some way for the data (db + pics) to get to the web server.

My setup

My Raspberry Pi is not exposed to the internet, and I also already had a web hosting account with FTP access available. Thus, in my setup, the binary and the frontend are running on two entirely different machines in two different networks.

The frontend is built and deployed via:

export FRONTEND_DEPLOY_TARGET_SSH_HOST=myuser@mywebserver:/var/www/trains/
cd frontend
make deploy

The binary on the Raspberry Pi in my home network will upload pictures and the updated db file via FTP to this webspace whenever a new train is detected. This is configured via the ENABLE_UPLOAD=true and UPLOAD_... env vars (or the corresponding CLI flags).

Alternative uploaders (e.g. SFTP, SCP, WebDAV, ...) could be implemented fairly easily (but are not, because I do not need them). For this, the Uploader interface from internal/pkg/upload/upload.go needs to be implemented, and corresponding configuration options added.
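
As a starting point, here is a minimal sketch of a "local FS" uploader. The interface shown is hypothetical (the real one lives in internal/pkg/upload/upload.go and may differ); writing via a temp file plus rename keeps readers from ever seeing partial files:

package upload

import (
	"context"
	"io"
	"os"
	"path/filepath"
)

// Uploader is a hypothetical stand-in for the real interface in
// internal/pkg/upload/upload.go.
type Uploader interface {
	Upload(ctx context.Context, remotePath string, r io.Reader) error
}

// LocalFS "uploads" by writing into a local directory, e.g. the wwwroot.
type LocalFS struct {
	Root string
}

func (l *LocalFS) Upload(ctx context.Context, remotePath string, r io.Reader) error {
	dest := filepath.Join(l.Root, remotePath)
	if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
		return err
	}
	tmp, err := os.CreateTemp(filepath.Dir(dest), ".upload-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // No-op once the rename below has succeeded.
	if _, err := io.Copy(tmp, r); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Rename is atomic on POSIX filesystems: readers see either the
	// old file or the complete new one, never a partial write.
	return os.Rename(tmp.Name(), dest)
}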

Hosting the frontend on the same machine

It is possible to deploy the frontend on the same machine where trainbot runs. There is no finished solution provided in this repo, but here are some hints:

  • Install an arbitrary static web server (Nginx, Apache, Caddy, ...).
    • A web server could also be added to the trainbot binary itself, see e.g. here; PRs welcome.
    • As wwwroot, this webserver needs the build output of the frontend, i.e. cd frontend; make build; [s]cp dist /var/www/trains.
  • Set up the trainbot binary to have its data directory somewhere inside the wwwroot via --data-dir / DATA_DIR.
    • Assuming the wwwroot is /var/www/trains, trainbot would be running with --data-dir=/var/www/trains/data

Note that this can lead to transient inconsistencies when the web server delivers the SQLite file at the same time the binary is writing to it. The clean solution would be to add another "local FS" uploader to trainbot (see the sketch in the previous section).

Hardware

I use a Raspberry Pi 4 Model B with 2 GiB of RAM, and a Raspberry Pi Camera v3 (narrow lens). The distance from the camera to the tracks is ca. 50 m.

All this is installed on my balcony in a waterproof case, as seen in the MagPi Magazine.

The case is this one from AliExpress: https://www.aliexpress.com/item/1005003010275396.html

3D Prints

Errata (not corrected in the models linked above):

  • The RPI mounting plate is 1-2 mm too wide, because the 86 mm stated in the picture on the AliExpress product page is in reality a bit less.
    • You can solve that by changing the 3D design, or by cutting off a bit from the print. It might however also depend on your specific case.
  • The RPI USB-C power plug does not fit into the case, because the case wall is in the way. I solved this by cutting the plug off and soldering the cable to the connector perpendicularly. You can probably avoid this by changing the 3D design to move the RPI as far to the left as possible.

Development

This repo is set up to compile for x86_64 and aarch64. There is support for building on your machine directly, or inside a Docker container.

Also, there is an extensive test suite. Tests may also be executed locally, or inside Docker.

The single entrypoint for everything (incl. Docker) is the Makefile. You can list the available targets via make list. The same is true for the frontend - check out frontend/Makefile.

Example:

git clone https://github.com/jo-m/trainbot
cd trainbot
make docker_build

# Find binaries in build/ after this has completed.

V4L Settings

# list
ffmpeg -f v4l2 -list_formats all -i /dev/video2
v4l2-ctl --all --device /dev/video2

# exposure
v4l2-ctl -c exposure_auto=3 --device /dev/video2

# autofocus
v4l2-ctl -c focus_auto=1 --device /dev/video2

# fixed
v4l2-ctl -c focus_auto=0 --device /dev/video2
v4l2-ctl -c focus_absolute=0 --device /dev/video2
v4l2-ctl -c focus_absolute=1023 --device /dev/video2

ffplay -f video4linux2 -framerate 30 -video_size 3264x2448 -pixel_format mjpeg /dev/video2
ffplay -f video4linux2 -framerate 30 -video_size 1920x1080 -pixel_format mjpeg /dev/video2

ffmpeg -f v4l2 -framerate 30 -video_size 3264x2448 -pixel_format mjpeg -i /dev/video2 output.avi

RasPi Cam v3 utils

# setup
sudo apt-get install libcamera0 libcamera-apps-lite
sudo apt install -y vlc

# grab frame
# https://www.raspberrypi.com/documentation/computers/camera_software.html#libcamera-and-libcamera-apps
libcamera-jpeg -o out.jpg -t 1 --width 4608 --height 2592 --rotation 180 --autofocus-mode=manual --lens-position=2
libcamera-jpeg -o out.jpg -t 1 --width 2304 --height 1296 --rotation 180 --autofocus-mode=manual --lens-position=4.5 --roi 0.25,0.5,0.5,0.5

# record video
DATE=$(date +'%F_%H-%M-%S'); libcamera-vid -o $DATE.h264 --save-pts $DATE.txt --width 1080 --height 720 --rotation 180 --autofocus-mode=manual --lens-position=0 -t 0

# stream through network
libcamera-vid -t 0 --inline --nopreview --width 4608 --height 2592 --rotation 180 --codec mjpeg --framerate 5 --listen -o tcp://0.0.0.0:8080 --autofocus-mode=manual --lens-position=0 --roi 0.25,0.5,0.5,0.5
# on localhost
ffplay http://pi4:8080/video.mjpeg

# manually record video for test cases
libcamera-vid \
   --verbose=1 \
   --timeout=0 \
   --inline \
   --nopreview \
   --width 240 --height 280 \
   --roi 0.429688,0.185185,0.104167,0.216049 \
   --mode=2304:1296:12:P \
   --framerate 30 \
   --autofocus-mode=manual --lens-position=0.000000 \
   --rotation=0 \
   -o vid.h264 --save-pts vid-timestamps.txt

mkvmerge -o test.mkv --timecodes 0:vid-timestamps.txt vid.h264

Code notes

  • Zerolog is used as the logging framework.
  • "Library" code uses panic(), "application" code uses log.Panic()... (see the example below).

Prometheus metrics/Grafana

For debugging and tweaking, a Prometheus-compatible endpoint can be exposed at port 18963 using --prometheus=true. A Grafana dashboard is also available.
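
Trainbot's actual wiring may differ, but exposing such an endpoint in Go typically boils down to the standard Prometheus client library's HTTP handler:

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Serves all metrics registered with the default registry.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":18963", nil)
}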

Flow chart for frame data


           libcamera-vid
                 │
                 ▼
        ┌─────────────────┐
        │                 │
        │   source queue  │
        │                 │
        └─────────────────┘
                 │
                 ▼
             findOffset ──────► discard
                 │
              record
                 │
                 ▼
          ┌────────────┐
          │            │
          │  sequence  │
          │            │
          └────────────┘
                 │
                 ▼
               fitDx
                 │
                 ▼
              stitch
                 │
                 ▼
           ┌───────────┐
           │           │
           │   image   │
           │           │
           └───────────┘

TODOs

  • Fix false positives in darkness
  • Add machine learning to classify trains (MobileNet, EfficientNet, https://mediapipe-studio.webapps.google.com/demo/image_classifier)
  • Add run/deploy instructions to README (including confighelper)
  • Maybe compress URL params - favorites list is getting longer and longer...
  • Remote blob cleanup is broken due to FTP LIST being restricted to 99998 entries by remote - use sftp instead

trainbot's People

Contributors: clonejo, jo-m

trainbot's Issues

Discrepancy in train lengths between left-moving and right-moving trains

There seems to be a mismatch in the estimated length between similar trains moving in different directions.

For instance, this left moving train is estimated as having a length of 205 m:
https://trains.jo-m.ch/#/trains/19476

Which is consistent with a similar left-moving train:
https://trains.jo-m.ch/#/trains/19494

However, right-moving trains at around the same time are considerably shorter, with this one estimated at 186 m:
https://trains.jo-m.ch/#/trains/19473

Or this one at 185 m:
https://trains.jo-m.ch/#/trains/19491

The same can be observed with other units. These left-going units are estimated at 104 m:
https://trains.jo-m.ch/#/trains/19410
https://trains.jo-m.ch/#/trains/19409

Whereas their right-moving friends are shorter, estimated at 98 and 97 m respectively:
https://trains.jo-m.ch/#/trains/19413
https://trains.jo-m.ch/#/trains/19414

What is also interesting is that the left-moving units seem to be consistent in their estimated lengths at different speeds, which is not the case with similar right-moving trains.

Tips for cropping?

I am nearly half a kilometer from the tracks but do have a view of passing trains. However, I am not getting great results (sidestepping the issues of #15 for now by passing it a file path directly instead of V4L) and am wondering if you have any advice/insight.

With the annotated crop region it does detect the motion of the train:

webcam image of train with a rectangle drawn over partial view of train

But it does not combine the whole train, only small bits. This is the first image, i.e. it misses the engine completely:
train_00010101_000038 75_Z

followed by:

train_00010101_000039 75_Z

and:

train_00010101_000136 708_Z

and a nice little stretch:

train_00010101_000156 791_Z

but then that's it for the whole train!

Any tips as to what crop region I should be aiming for? For example, if I expand the box to include the train on both sides of the tree, trainbot seems to just not trigger on the motion at all.

Admittedly this is not a very ideal view of the train, but it seems to be showing some promise! 🥳 Any tips as to what to try? What sorts of things matter most: would the first step be to try to straighten out the camera so it's not angled? Adjust the crop region to be bigger or smaller? Use a longer telephoto lens for more pixels per meter?

Exit program due to configuration error?

Considering this error case:

if frameRGBA.Rect.Dx() < maxDx*3 {
	log.Error().
		Int("dx", frameRGBA.Rect.Dx()).
		Int("maxDx*3", maxDx*3).
		Float64("framePeriodS", framePeriodS).
		Msg("image is not wide enough to resolve the given max speed")
	return nil
}
As far as I understand, this just comes down to the --rect-w, --px-per-m and --max-speed-kph parameters. So it is always a static configuration mistake. I propose exiting the program early with an error code.
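
A sketch of the proposed change, reusing the (assumed) names from the snippet above: validate once, and fail fast with an error instead of logging and returning nil for every frame:

// Hypothetical replacement for the snippet above: surface a
// configuration error instead of silently dropping frames.
if frameRGBA.Rect.Dx() < maxDx*3 {
	return fmt.Errorf(
		"image width %d px cannot resolve the given max speed (need >= %d px): check --rect-w, --px-per-m and --max-speed-kph",
		frameRGBA.Rect.Dx(), maxDx*3)
}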

"frame period too small" warning

Trying to use trainbot on a test recording from a longer-lensed new USB camera, I end up just getting hundreds of errors logged like:

{"level":"warn","framePeriodS":0.000011111,"time":"2024-05-01T03:04:48.367Z","caller":"/src/internal/pkg/stitch/auto.
go:237","message":"frame period too small"}

And no stitched output.

I see this would be logged by

log.Warn().Float64("framePeriodS", framePeriodS).Msg("frame period too small")

introduced in fc92da1, but it is unclear why. If the frame period is too small, does that mean the frame frequency (i.e. fps) of my camera is too fast?

QuickTime Player on my main laptop reports it as 32.34 fps, which doesn't seem particularly high.

"Killed" (processing from 3MP input sample mp4)

When processing a small rectangle out of a large video, trainbot seems to detect a train but then ends up "Killed." (exit code 137) inside my VM, which has 3.8Gi of RAM plus 11Gi of swap added (the swap just lets it run slightly longer after it starts processing a train). This is with a local sample file which it processes through ffmpeg. The source file is 2304 x 1296 pixels, but the crop region is only e.g. 192 x 133.

If I pre-crop the sample video, e.g. ffmpeg -i sample-rot.mp4 -filter:v "crop=198:133:1403:284" sample-crop.mp4, then I am able to completely process the feed with trainbot. So I wonder if it is somehow trying to keep in memory not just the crop region but all the original (whole) frames too?
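
For what it's worth, Go's standard library makes that plausible: SubImage returns a view sharing the parent frame's pixel buffer, so holding on to crops keeps the full frames alive unless they are deep-copied. A small standalone illustration (not trainbot's actual code):

package main

import (
	"fmt"
	"image"
	"image/draw"
)

func main() {
	full := image.NewRGBA(image.Rect(0, 0, 2304, 1296)) // ~11 MiB of pixels
	rect := image.Rect(1403, 284, 1403+198, 284+133)

	// view shares full's backing array: keeping it pins the whole frame.
	view := full.SubImage(rect).(*image.RGBA)

	// A deep copy retains only the crop's own pixels.
	crop := image.NewRGBA(image.Rect(0, 0, rect.Dx(), rect.Dy()))
	draw.Draw(crop, crop.Bounds(), view, rect.Min, draw.Src)

	fmt.Println(len(view.Pix), len(crop.Pix))
}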

Invalid DateTime

Hello! I'm having loads of fun setting up OnlyTrains. A (static) set of recordings is up at https://trains.shakik.de/ (maybe we should have a list of installations somewhere? I only found yours.)

As a prototype I made some videos with a zoom camera and processed them on an Arch Linux machine. I don't remember anymore how I built the trainbot binary, unfortunately.

In the frontend I am getting 'Invalid DateTime' instead of the timestamp. I have noticed that the timestamps in my db look like this:
2023-11-05 18:14:15.627118644 +0000 UTC,
whereas on trains.jo-m.ch they look like this:
2023-11-10 13:59:03.683752298+00:00.
When I manually drop the UTC from the timestamp, the frontend is able to parse it. (I have fixed it manually in the DB on my website.)
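
For reference, the first form is exactly what Go's default time.Time.String() produces, while the second corresponds to an explicit format. A minimal illustration (the layout is a guess, not necessarily what trainbot uses):

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2023, 11, 5, 18, 14, 15, 627118644, time.UTC)

	// Default String(): "2023-11-05 18:14:15.627118644 +0000 UTC"
	fmt.Println(t.String())

	// Explicit layout yielding the parseable form,
	// "2023-11-05 18:14:15.627118644+00:00":
	fmt.Println(t.Format("2006-01-02 15:04:05.999999999-07:00"))
}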

I'm not a Go expert, but I may dig deeper into the Go code next.

Camera Setup

After some prototype runs with a digital camera, I now want to pick a camera/lens for the permanent setup …

At 60 m distance and with catenary wires usually at 5.5 m height, I have calculated that I'll need a ~6° field of view. I am hoping I can still get some OK-ish recordings at night; the track is actually lit by sodium vapor lights.

Currently I am looking at the Raspberry Pi HQ camera + Waveshare 18256 zoom lens. That would be around 100€ (plus the Raspberry Pi).

And when I have the final camera setup, I'll have to tweak the detection a bit; some trains are getting sliced up and some are not detected at all.

YUYV from V4L panics with "img does not implement SubImage()" error

I'm hoping to get this working with a V4L (loopback) device fed by ffmpeg. Not just for testing (since there are only a few trains a day at my location), but also so that I can eventually feed it from a networked camera via a go2rtc RTSP feed. For now, it's just a sample recording of a train going past, to see it work.

So in one terminal I do:

sudo modprobe v4l2loopback

export FEED_URL=~/sample.mp4
export OUTPUT=/dev/video0

ffmpeg -i "$FEED_URL" -f v4l2 -pix_fmt yuyv422 "$OUTPUT"

And then in another terminal I do:

export DATA_DIR=~/trains
export INPUT=/dev/video0
export CAMERA_W=2304
export CAMERA_H=1296
export RECT_X=1152
export RECT_Y=818
export RECT_W=473
export RECT_H=242
export PX_PER_M=19  # e.g. 430 px, 23 m
# see https://github.com/jo-m/trainbot/blob/43ad8c9716f385ef0714a5057dfe751360844e8d/pkg/vid/cam.go#L213
# and `V4L2_PIX_FMT_…` in https://www.kernel.org/doc/html/v4.10/media/uapi/v4l/videodev.html#videodev2-h
#export CAMERA_FORMAT_FOURCC=422P
#export CAMERA_FORMAT_FOURCC=mjpg
export CAMERA_FORMAT_FOURCC=YUYV

./trainbot

This combination gets me the farthest without general opening/format errors, but then I am left with an error:

{"level":"panic","error":"img does not implement SubImage()","time":"2023-11-22T02:18:16.864Z","caller":"/src/cmd/trainbot/main.go:162","message":"failed to crop frame"}

Any advice?
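
For context: cropping in Go is usually done through a SubImage method, which is not part of the image.Image interface, so code has to type-assert for it; a frame type without that method fails exactly like this. A hedged sketch of the pattern (not trainbot's actual code):

package main

import (
	"fmt"
	"image"
)

// subImager is the common ad-hoc interface for croppable images; the
// concrete stdlib types (image.RGBA, image.YCbCr, ...) implement it.
type subImager interface {
	SubImage(r image.Rectangle) image.Image
}

func crop(img image.Image, r image.Rectangle) (image.Image, error) {
	s, ok := img.(subImager)
	if !ok {
		// Matches the "img does not implement SubImage()" failure: the
		// decoded frame's type simply has no SubImage method.
		return nil, fmt.Errorf("img does not implement SubImage()")
	}
	return s.SubImage(r), nil
}

func main() {
	img := image.NewRGBA(image.Rect(0, 0, 100, 100))
	cropped, err := crop(img, image.Rect(10, 10, 50, 50))
	fmt.Println(cropped.Bounds(), err)
}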

Weatherproof case - where to buy?

Hi Jonathan,

I saw the picture of the weatherproof case you are using in the current MagPi issue and would kindly like to ask where you bought it. Or is it self-printed?

kind regards,

Christoph

Use image stacking instead of gluing slices together

Currently, trainbot pieces together slices of the moving train. This works well when the view within the used crop rectangle is unobstructed and the exposure/lighting is similar along the X axis.

Unfortunately, I have to deal with some obstruction from foliage:
train_20231125_120357 38_+01 00

One can kind of reduce this by narrowing the crop rectangle, but I am already quite limited.

But since we have each piece of the train exposed at least twice (with the current movement detection code, even three times), we can stack them together. Here is an example using a better camera, manual stacking of three frames in GIMP, and then enfuse for stacking:
enfuse
compared to a single frame:
manuall-stacked-637

We could even pick out the parts that don't change between frames, and create a non-rectangular mask for picking only the unobstructed parts.

Possible implementation: we pretty much have all the pieces already; we'll just have to save each frame placed into a separate, otherwise transparent image file, and then run enfuse over all the files (see the sketch below).
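
A sketch of the first half of that idea using only the standard library (all names assumed; enfuse would then be run over the generated files):

package main

import (
	"fmt"
	"image"
	"image/draw"
	"image/png"
	"os"
)

// placeFrames writes each frame into its own canvas-sized, otherwise
// transparent PNG, shifted to its fitted x offset, ready for enfuse.
func placeFrames(frames []*image.RGBA, offsets []int, canvasW, canvasH int) error {
	for i, frame := range frames {
		canvas := image.NewRGBA(image.Rect(0, 0, canvasW, canvasH))
		dst := frame.Bounds().Add(image.Pt(offsets[i], 0))
		draw.Draw(canvas, dst, frame, frame.Bounds().Min, draw.Src)

		f, err := os.Create(fmt.Sprintf("frame_%04d.png", i))
		if err != nil {
			return err
		}
		if err := png.Encode(f, canvas); err != nil {
			f.Close()
			return err
		}
		if err := f.Close(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Toy example: two 10x10 frames shifted by 0 and 5 px.
	frames := []*image.RGBA{
		image.NewRGBA(image.Rect(0, 0, 10, 10)),
		image.NewRGBA(image.Rect(0, 0, 10, 10)),
	}
	if err := placeFrames(frames, []int{0, 5}, 30, 10); err != nil {
		panic(err)
	}
}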

panic: frame bounds or size not consistent, this should not happen

Just trying this out...

trainbot --log-pretty --input /dev/video0 --camera-format-fourcc=MJPG -X 600 -Y 340 -W 400 -H 300 --px-per-m=40

[...]
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:278 > start of new sequence
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:191 > end of sequence, trying to stitch
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/stitch.go:194 > fitAndStitch() dx=[86,88,1,44,54,52,50,53,42,42,1,0,59,46,36,33,30,32,38,47,46,37,1,42,44,41,44,48,47,56,72,35,0,45,45,51,52,0,0,0,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] len(frames)=57
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:194 > unable to fit and stitch sequence error="was not able to fit the sequence: RANSAC unsuccessful"
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:278 > start of new sequence
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:191 > end of sequence, trying to stitch
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/stitch.go:194 > fitAndStitch() dx=[-71,-84,-114,-99,-67,-59,-59,-56,-54,-53,-37,-29,-41,-41,-36,-42,-44,-46,-45,-44,-42,-27,-29,-28,-30,-38,-36,-39,-44,-48,-13,-53,-52,-47,-48,-27,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] len(frames)=55
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:194 > unable to fit and stitch sequence error="was not able to fit the sequence: RANSAC unsuccessful"
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:278 > start of new sequence
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100236147 maxDx*3=537
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100061264 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099759827 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100673147 maxDx*3=537
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099532631 maxDx*3=531
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099793561 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100027111 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.09994344 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099959504 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100374644 maxDx*3=537
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099873669 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099777077 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100093671 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099918577 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099952101 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.10011141 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.10016414 maxDx*3=537
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099359983 maxDx*3=531
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100380231 maxDx*3=537
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.099905447 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100090387 maxDx*3=534
12:20AM ERR github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:238 > image is not wide enough to resolve the given max speed dx=400 framePeriodS=0.100052674 maxDx*3=534
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:191 > end of sequence, trying to stitch
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/stitch.go:194 > fitAndStitch() dx=[47,55,66,54,1,1,46,55,51,51,62,60,53,39,39,0,51,50,48,46,44,41,44,52,43,44,0,0,1,-6,4,-1,0,1,-1,0,0,0,14,11,7,5,3,1,0,0,0] len(frames)=47
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/stitch.go:46 > stitch() dx=[64,63,58,62,60,56,59,57,53,55,55,50,53,52,48,50,49,45,47,46,42,44,43,40,42,39,249,6,5,5,3,3,2,0,0,-1,-2,-3,-4,-5,-6,-7,-8,-8] len(frames)=44
12:20AM PNC github.com/jo-m/trainbot/internal/pkg/stitch/stitch.go:60 > frame bounds or size not consistent, this should not happen
12:20AM INF github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:187 > nothing to assemble
panic: frame bounds or size not consistent, this should not happen

goroutine 1 [running]:
github.com/rs/zerolog.(*Logger).Panic.func1({0x1009f9a?, 0x0?})
	github.com/rs/[email protected]/log.go:376 +0x2d
github.com/rs/zerolog.(*Event).msg(0xc002436c00, {0x1009f9a, 0x3b})
	github.com/rs/[email protected]/event.go:156 +0x2a5
github.com/rs/zerolog.(*Event).Msg(...)
	github.com/rs/[email protected]/event.go:108
github.com/jo-m/trainbot/internal/pkg/stitch.stitch({0xc00048b400, 0x2c, 0x1da0480?}, {0xc000148580, 0x2c, 0x2c})
	github.com/jo-m/trainbot/internal/pkg/stitch/stitch.go:60 +0x39c
github.com/jo-m/trainbot/internal/pkg/stitch.fitAndStitch({0xc0002027c8, {0xc00048b400, 0x2c, 0x40}, {0xc000038800, 0x2c, 0x40}, {0xc00015e000, 0x2c, 0x40}}, ...)
	github.com/jo-m/trainbot/internal/pkg/stitch/stitch.go:232 +0x585
github.com/jo-m/trainbot/internal/pkg/stitch.(*AutoStitcher).TryStitchAndReset(0xc0000cbac0)
	github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:192 +0x19e
github.com/jo-m/trainbot/internal/pkg/stitch.(*AutoStitcher).Frame(0xc0000cbac0, {0x1316250, 0xc000054d00}, {0x7ffedb62ce1c?, 0xb?, 0x1da0480?})
	github.com/jo-m/trainbot/internal/pkg/stitch/auto.go:265 +0x852
main.detectTrainsForever({{0x1, {0xee686d, 0x4}}, {0x7ffedb62ce1c, 0xb}, {0x7ffedb62ce3f, 0x4}, 0x780, 0x438, 0x258, ...}, ...)
	github.com/jo-m/trainbot/cmd/trainbot/main.go:165 +0x606
main.main()
	github.com/jo-m/trainbot/cmd/trainbot/main.go:350 +0x7c5

Two trains in one image

I was scrolling through trying to find the longest train and I found a pair of trains recorded as 587m.
A pair of trains

Looking at the animation, it appears that the two trains passed in opposite directions. They did not overlap, but were quite close to each other.
A pair of trains passing in opposite directions

For some reason, the image stitching algorithm put them both together as one train and also stitched the second train in the wrong direction.

Alignment

Thanks for sharing this fun project.

Could it be possible that the camera is not perfectly aligned with the track? I noticed some jagged features on the stitch lines:
image
It looks like something that could be solved in hardware or software.

Alternative frontend

Hello jo-m,

I have started an alternative frontend with server-side rendering; a demo can be seen at https://trains.shakik.de/s/

It is pretty bare-bones for now, and I don't intend for it to reach feature-parity with your Vue frontend.

Notable features:

  • automatic reloading of the front page whenever a new train has been recorded
  • horizontally scrollable thumbnails
  • loads fairly quickly
  • (best of trains list, but that requires an additional table with view statistics, and it's broken with trains_v2)

Currently I have put up the code at https://gitlab.aachen.ccc.de/clonejo/onlytrains-frontend-rs/.

I am wondering if you would want to take the code into the main trainbot repo, but I assume you don't want to take maintainership of a bunch of Rust code :)
It is probably better if I just host a separate repo; then it is clear that the frontend is not always kept up to date with trainbot, and I can just push/review changes myself.

Best,
clonejo

bad sample in test_more

Love the big test suite!

(Downloaded from https://trains.jo-m.ch/testdata.zip)

2023/11/12 22:25:40 compiled command: ffmpeg -i testdata/set2/train180.mkv -f rawvideo -pix_fmt rgba pipe:
    auto_set0_test.go:81:
        	Error Trace:	/home/clonejo/onlytrains/trainbot/internal/pkg/stitch/auto_set0_test.go:81
        	            				/home/clonejo/onlytrains/trainbot/internal/pkg/stitch/auto_set2_test.go:28
        	Error:      	Should be false
        	Test:       	Test_AutoStitcher_Set2_All
        	Messages:   	expected 1 train(s) but 2 detected: testdata/set2/train180.mkv

This video actually has two trains.

I also get failures on set2/train050.mkv and set2/train073.mkv, but those have one train that is just hard to detect.
