
Tube Archivist
more screenshots and video


Core functionality

Once your YouTube video collection grows, it becomes hard to search and find a specific video. That's where Tube Archivist comes in: by indexing your video collection with metadata from YouTube, you can organize, search and enjoy your archived YouTube videos offline through a convenient web interface. This includes:

  • Subscribe to your favorite YouTube channels
  • Download Videos using yt-dlp
  • Index and make videos searchable
  • Play videos
  • Keep track of viewed and unviewed videos

Installing

For minimal system requirements, the Tube Archivist stack needs around 2GB of available memory for a small testing setup and around 4GB for a mid to large sized installation. A dual core CPU with 4 threads is the minimum; a quad core or better is recommended. This project requires Docker; ensure it is installed and running on your system.

The documentation has additional user-provided instructions for Unraid, Synology, Podman and TrueNAS.

The instructions here should get you up and running quickly. For Docker beginners and a full explanation of each environment variable, see the docs.

Take a look at the example docker-compose.yml and configure the required environment variables, listed in the tables below; a minimal sketch follows after them.

TubeArchivist:

Environment Var | Value | State
TA_HOST | Server IP or hostname | Required
TA_USERNAME | Initial username when logging into TA | Required
TA_PASSWORD | Initial password when logging into TA | Required
ELASTIC_PASSWORD | Password for ElasticSearch | Required
REDIS_HOST | Hostname for Redis | Required
TZ | Set your timezone for the scheduler | Required
TA_PORT | Overwrite Nginx port | Optional
TA_UWSGI_PORT | Overwrite container-internal uwsgi port | Optional
TA_ENABLE_AUTH_PROXY | Enable support for forwarding auth in reverse proxies | Read more
TA_AUTH_PROXY_USERNAME_HEADER | Header containing username to log in | Optional
TA_AUTH_PROXY_LOGOUT_URL | Logout URL for forwarded auth | Optional
ES_URL | URL that ElasticSearch runs on | Optional
ES_DISABLE_VERIFY_SSL | Disable ElasticSearch SSL certificate verification | Optional
ES_SNAPSHOT_DIR | Custom path where ElasticSearch stores snapshots for master/data nodes | Optional
HOST_GID | Allow TA to own the video files instead of the container user | Optional
HOST_UID | Allow TA to own the video files instead of the container user | Optional
ELASTIC_USER | Change the default ElasticSearch user | Optional
REDIS_PORT | Port that Redis runs on | Optional
TA_LDAP | Configure TA to use LDAP authentication | Read more
ENABLE_CAST | Enable casting support | Read more
DJANGO_DEBUG | Return additional error messages, for debug only | Optional

ElasticSearch

Environment Var | Value | State
ELASTIC_PASSWORD | Matching ELASTIC_PASSWORD from TubeArchivist | Required
http.port | Change the port ElasticSearch runs on | Optional
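
As a rough sketch of how the required variables fit together in the compose file (service names and values here are placeholders based on the examples further down in this document; adjust to your setup):

services:
  tubearchivist:
    image: bbilly1/tubearchivist:latest
    ports:
      - 8000:8000
    environment:
      - TA_HOST=tube.example.com     # placeholder: your server IP or hostname
      - TA_USERNAME=tubearchivist    # initial admin username
      - TA_PASSWORD=verysecret       # placeholder: initial admin password
      - ELASTIC_PASSWORD=verysecret  # must match ELASTIC_PASSWORD on the ES container
      - REDIS_HOST=archivist-redis   # service name of the Redis container
      - TZ=America/New_York          # placeholder: timezone for the scheduler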

Update

Always use the latest tag (the default) or a named semantic version tag for the Docker images. The unstable tags are only for your testing environment; there might not be an update path for these testing builds.

You will see the current version number of Tube Archivist in the footer of the interface. A daily version check task queries tubearchivist.com and notifies you of any new releases in the footer. To update, you need to update the Docker images; the method depends on your platform. For example, if you're using docker-compose, run docker-compose pull and then restart with docker-compose up -d. After updating, check the footer to verify you are running the expected version.

  • Updates are tested across one or two releases at most. Updating from further back may or may not be supported, and you might have to reset your index and configuration. Ideally, apply new updates at least once per month.
  • There can be breaking changes between updates. Particularly as the application grows, new environment variables or settings might need to be set in your docker-compose file. Always check the release notes: any breaking changes will be marked there.
  • All testing and development is done with the Elasticsearch version mentioned in the provided docker-compose.yml file, which is updated when a new release of Elasticsearch is available. Running an older version of Elasticsearch will most likely not cause issues, but it's still recommended to run the version mentioned there. Use bbilly1/tubearchivist-es to automatically get the recommended version.
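
If you opt for bbilly1/tubearchivist-es, the change in your compose file is a one-line image swap; a sketch, using the ES service name from the examples further down:

  archivist-es:
    image: bbilly1/tubearchivist-es   # tracks the ES version TA is tested against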

Getting Started

  1. Go through the settings page and look at the available options. In particular, set Download Format to your desired video quality before downloading. Tube Archivist downloads the best available quality by default. To support iOS or macOS and some other browsers, a compatible format must be specified. For example:
bestvideo[vcodec*=avc1]+bestaudio[acodec*=mp4a]/mp4
  2. Subscribe to some of your favorite YouTube channels on the channels page.
  3. On the downloads page, click on Rescan subscriptions to add videos from the subscribed channels to your download queue, or click on Add to download queue to manually add video IDs, links, channels or playlists.
  4. Click on Start download and let Tube Archivist do its thing.
  5. Enjoy your archived collection!

Port Collisions

If you have a collision on port 8000, the best solution is to use Docker's HOST_PORT and CONTAINER_PORT distinction: for example, to serve the interface on port 9000 instead, use 9000:8000 in your docker-compose file.
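
A sketch of that remapping in docker-compose; the right-hand side stays on the container-internal port 8000:

services:
  tubearchivist:
    ports:
      - 9000:8000   # HOST_PORT:CONTAINER_PORT, interface now reachable on port 9000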

For more information on port collisions, check the docs.

Common Errors

Here is a list of common errors and their solutions.

vm.max_map_count

Elasticsearch in Docker requires the host machine's kernel setting vm.max_map_count to be at least 262144.

To set the value temporarily, run:

sudo sysctl -w vm.max_map_count=262144

How to apply the change permanently depends on your host operating system:

  • On Ubuntu Server, for example, add vm.max_map_count = 262144 to the file /etc/sysctl.conf.
  • On Arch-based systems, create a file /etc/sysctl.d/max_map_count.conf with the content vm.max_map_count = 262144.
  • On any other platform, consult the documentation on how to set kernel parameters.
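
As a concrete sketch of the sysctl.d route described above (assuming a distribution where sysctl --system reloads the configuration files):

echo 'vm.max_map_count = 262144' | sudo tee /etc/sysctl.d/max_map_count.conf
sudo sysctl --system   # reload all sysctl configuration files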

Permissions for elasticsearch

If you see a message similar to Unable to access 'path.repo' (/usr/share/elasticsearch/data/snapshot) or failed to obtain node locks, tried [/usr/share/elasticsearch/data] and maybe these locations are not writable when initially starting Elasticsearch, that probably means the container is not allowed to write files to the volume.
To fix the issue, shut down the container and run on your host machine:

chown 1000:0 -R /path/to/mount/point

This matches the permissions with the UID and GID of the Elasticsearch process within the container and should fix the issue.

Disk usage

The Elasticsearch index will turn read-only if the disk usage of the container goes above 95% and will stay read-only until usage drops below 90% again; you will see error messages like disk usage exceeded flood-stage watermark.

Similarly, Tube Archivist will misbehave in all sorts of ways when running out of disk space. There are some error messages in the logs when that happens, but it's best to make sure you have enough disk space before starting to download.
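
If the index stays locked after you have freed up disk space, the read-only block can be lifted through the Elasticsearch settings API; a sketch, assuming the default host, port and user (adjust credentials to your setup):

curl -u elastic:yourpassword -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'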

error setting rlimit

If you are seeing errors like failed to create shim: OCI runtime create failed and error during container init: error setting rlimits, this means Docker can't set these limits, usually because they are already set elsewhere or are otherwise incompatible. The solution is to remove the ulimits key from the ES container in your docker-compose file and start again; the block to remove is sketched below.

This can happen with nested virtualization, e.g. LXC running Docker in Proxmox.
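
For reference, this is the block to delete, as it appears in the example compose files elsewhere in this document:

  archivist-es:
    # remove this whole ulimits block if you hit "error setting rlimits"
    ulimits:
      memlock:
        soft: -1
        hard: -1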

Known limitations

  • Video files created by Tube Archivist need to be playable in your browser of choice. Not every codec is compatible with every browser and might require some testing with format selection.
  • Every limitation of yt-dlp will also be present in Tube Archivist. If yt-dlp can't download or extract a video for any reason, Tube Archivist won't be able to either.
  • There is no flexibility in the naming of the media files.

Roadmap

We have come far; nonetheless, we are not short of ideas on how to improve and extend this project. Issues waiting to be tackled, in no particular order:

  • User roles
  • Audio download
  • Podcast mode to serve channel as mp3
  • Random and repeat controls (#108, #220)
  • Auto play or play next link (#226)
  • Multi language support
  • Show total videos downloaded vs. total videos available per channel
  • Download or Ignore videos by keyword (#163)
  • Custom searchable notes to videos, channels, playlists (#144)
  • Search comments
  • Search download queue
  • Configure shorts, streams and video sizes per channel

Implemented:

  • User created playlists [2024-04-10]
  • Add statistics of index [2023-09-03]
  • Implement Apprise for notifications [2023-08-05]
  • Download video comments [2022-11-30]
  • Show similar videos on video page [2022-11-30]
  • Implement complete offline media file import from json file [2022-08-20]
  • Filter and query in search form, search by url query [2022-07-23]
  • Make items in grid row configurable to use more of the screen [2022-06-04]
  • Add passing browser cookies to yt-dlp [2022-05-08]
  • Add SponsorBlock integration [2022-04-16]
  • Implement per channel settings [2022-03-26]
  • Subtitle download & indexing [2022-02-13]
  • Fancy advanced unified search interface [2022-01-08]
  • Auto rescan and auto download on a schedule [2021-12-17]
  • Optional automatic deletion of watched items after a specified time [2021-12-17]
  • Create playlists [2021-11-27]
  • Access control [2021-11-01]
  • Delete videos and channel [2021-10-16]
  • Add thumbnail embed option [2021-10-16]
  • Create a github wiki for user documentation [2021-10-03]
  • Grid and list view for both channel and video list pages [2021-10-03]
  • Un-ignore videos [2021-10-03]
  • Dynamic download queue [2021-09-26]
  • Backup and restore [2021-09-22]
  • Scan your file system to index already downloaded videos [2021-09-14]

User Scripts

This is a list of useful user scripts, generously created by folks like you to extend this project and its functionality. Make sure to check the respective repository links for detailed license information.

This is your time to shine: read this, then open a PR to add your script here.

Donate

The best donation to Tube Archivist is your time; take a look at the contribution page to get started.
The second best way to support development is to provide for caffeinated beverages:

Notable mentions

This is a selection of places where this project has been featured on Reddit, in the news, blogs or other online media, newest on top.

  • ycombinator: Tube Archivist on Hackernews front page, [2023-07-16] [link]
  • linux-community.de: Tube Archivist bringt Ordnung in die Youtube-Sammlung, [German] [2023-05-01] [link]
  • noted.lol: Dev Debrief, An Interview With the Developer of Tube Archivist, [2023-03-30] [link]
  • console.substack.com: Interview With Simon of Tube Archivist, [2023-01-29] [link]
  • reddit.com: Tube Archivist v0.3.0 - Now Archiving Comments, [2022-12-02] [link]
  • reddit.com: Tube Archivist v0.2 - Now with Full Text Search, [2022-07-24] [link]
  • noted.lol: How I Control What Media My Kids Watch Using Tube Archivist, [2022-03-27] [link]
  • thehomelab.wiki: Tube Archivist - A Youtube-DL Alternative on Steroids, [2022-01-27] [link]
  • reddit.com: Celebrating TubeArchivist v0.1, [2022-01-09] [link]
  • linuxunplugged.com: Pick: tubearchivist - Your self-hosted YouTube media server, [2021-09-11] [link] and [2021-10-05] [link]
  • reddit.com: Introducing Tube Archivist, your self hosted Youtube media server, [2021-09-12] [link]

Sponsor

Big thank you to DigitalOcean for generously donating credit for the tubearchivist.com VPS and build server.

tubearchivist's People

Contributors

ainsey11, ajgon, anonamouslyginger, bakkot, bbilly1, borgmanjeremy, cclauss, danielbatterystapler, deltacodepl, dmynerd78, dot-mike, extome9000, gentoli, gigafyde, insuusvenerati, lamusmaser, lickitysplitted, merlinscheurer, mglinski, micah686, n8detar, omarlaham, p0358, pairofcrocs, phuriousgeorge, privateger, stevwonder, stratus-ss, technicallyoffbeat, wesleym77


tubearchivist's Issues

WARNING: A terminally deprecated method in java.lang.System has been called

Discussed in https://github.com/bbilly1/tubearchivist/discussions/69

Originally posted by rastacalavera October 21, 2021
Trying this out for the first time and can't get past a docker-compose up command. The logs are showing the error in the title:
WARNING: A terminally deprecated method in java.lang.System has been called
There is a lot more, obviously, but I figured I would start here; if someone wants a pastebin of the full thing I can do that later.
I did run the command sudo sysctl -w vm.max_map_count=262144, which I think is supposed to help with memory allocation, but I may be wrong about that.

Playing downloads on Safari

Hey there - I am finding that downloaded videos won't play in Safari browsers (Mac or iOS). This seems to be an issue with how the webserver provides 'range' data to clients. This is beyond my expertise to fix. Any thoughts?

Oh… works great on Chrome though :)

Unable to launch app

Hi there! I'm working on creating a Helm chart for this lovely project and am running into some issues that I don't recognize because I've never worked on a Python project before. I was wondering if you could help me diagnose this error, which seems to be related to the elasticsearch connection.

You can see the progress of my POC helm chart in my fork https://github.com/insuusvenerati/tubearchivist/tree/feature/helm-chart

compiled with version: 10.2.1 20210110 on 17 October 2021 04:44:47
os: Linux-5.4.0-84-generic #94 SMP Sun Sep 19 04:06:53 UTC 2021
nodename: tubearchivist-867c8db47f-pw46v
machine: x86_64
clock source: unix
detected number of CPU cores: 8
current working directory: /app
writing pidfile to /tmp/project-master.pid
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :8080 fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
Python version: 3.9.7 (default, Oct 12 2021, 02:43:43)  [GCC 10.2.1 20210110]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x55b0d3910000
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145808 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
Traceback (most recent call last):
  File "/app/./config/wsgi.py", line 16, in <module>
    application = get_wsgi_application()
  File "/usr/local/lib/python3.9/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
    django.setup(set_prefix=False)
  File "/usr/local/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 122, in populate
    app_config.ready()
  File "/app/./home/apps.py", line 49, in ready
    index_check()
  File "/app/./home/src/index_management.py", line 506, in index_check
    handler = ElasticIndex(index_name, expected_map, expected_set)
  File "/app/./home/src/index_management.py", line 173, in __init__
    self.exists, self.details = self.index_exists()
  File "/app/./home/src/index_management.py", line 179, in index_exists
    response = requests.get(url)
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 649, in send
    adapter = self.get_adapter(url=request.url)
  File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 742, in get_adapter
    raise InvalidSchema("No connection adapters were found for {!r}".format(url))
requests.exceptions.InvalidSchema: No connection adapters were found for 'elasticsearch-master:9200/ta_channel'
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) *** 
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 16)
spawned uWSGI worker 1 (pid: 28, cores: 1)
/usr/local/lib/python3.9/site-packages/celery/platforms.py:834: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
  warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
 
 -------------- celery@tubearchivist-867c8db47f-pw46v v5.1.2 (sun-harmonics)
--- ***** ----- 
-- ******* ---- Linux-5.4.0-84-generic-x86_64-with-glibc2.31 2021-10-20 17:05:59
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x7f10bb95d5b0
- ** ---------- .> transport:   redis://tubearchivist-rejson:6379//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                
[tasks]
  . home.tasks.check_reindex
  . home.tasks.download_pending
  . home.tasks.download_single
  . home.tasks.extrac_dl
  . home.tasks.rescan_filesystem
  . home.tasks.run_backup
  . home.tasks.run_manual_import
  . home.tasks.run_restore_backup
  . home.tasks.update_subscribed
[2021-10-20 17:06:00,006: INFO/MainProcess] Connected to redis://tubearchivist-rejson:6379//
[2021-10-20 17:06:00,023: INFO/MainProcess] mingle: searching for neighbors
[2021-10-20 17:06:01,067: INFO/MainProcess] mingle: sync with 1 nodes
[2021-10-20 17:06:01,068: INFO/MainProcess] mingle: sync complete
[2021-10-20 17:06:01,093: INFO/MainProcess] celery@tubearchivist-867c8db47f-pw46v ready.

ElasticSearch container fails to start on KDE Neon

I've followed the installation instructions.
When I run (as root) 'docker-compose up', I get the error mentioned in the documentation (AccessDeniedException).
I shut down the containers and ran the command 'chown 1000:0 /mount-point', where the mount point (as I saw in the YAML file) is /usr/share/elasticsearch/data.
That directory did not exist, so I created it and then set the ownership, but the problem persists.
The exception from ES also says 'refer to the log at /usr/share/elasticsearch/logs/docker-cluster.log', but the logs directory did not exist. I created it and tried again, but the log file still does not get created.
The OS is KDE Neon (an Ubuntu derivative), docker-compose version is 1.25.0 and docker version is 20.10.7.

UI Change Request: Remove the word "Download:" from the download page list

On the tubearchivist download page (i.e. mydomain.com/downloads/), each download shows nicely and has great info, though the video title has "Download:" prepended to it.

It seems redundant to have that in the video title since we know we're on the downloads page, and it also makes the titles a bit harder to read.

Videos won't play

I've left all settings at default. A couple of things are happening:

It seems downloads are stalling and I have to hit download queue multiple times to get them going again. Nothing helpful in the logs.

When playing a video in Firefox (Linux) I get:
no video with supported format and MIME type found.
On Chrome I get no controls or anything, just a screen with what I assume is the video's starting thumbnail.

How can I donate?

Not being a technical person I can't donate my time, however, I'd love to be a monthly sponsor of this project.

Would you be willing to set up the Sponsor program so that I can help out :)

Keep up the awesome work!

v0.0.4 broke the download queue.

After updating from v0.0.3 to v0.0.4, I can no longer add videos to my download queue. When I click "Download now" the page hangs for a second and refreshes, with the same video still available to download and the download queue empty.

These are the logs I'm getting: https://pastebin.com/w3q3fNbV

It refers to "192.168.1.4" quite often, which confuses me, because the IP that all 3 services use is "192.168.1.16".

I'm running unraid to host TubeArchivist, ElasticSearch, and Redis.

Automatic Deletion of watched items after a specified time

Hi just stumbled across this project and first impressions are awesome.

I wanted to ask if you would consider an option to automatically delete watched videos after a week or so.
My use case is not to archive every channel but just to watch the newest videos and after that I would like to reclaim the disk space.

Let me know what you think about it.

Greetings
cpt

Some more url formats would be nice

Channel links with the Channel name don't work:

tubearchivist      | {'csrfmiddlewaretoken': ['AY0tO0Dav2zXgzjttPYb4lEsQ5qgYn5NE4Fzk66r983FaufAOvSgNKSTb3mBAUFQ'], 'subscribe': ['https://www.youtube.com/c/veritasium']}
tubearchivist      | parsing subscribe ids failed!
tubearchivist      | ['https://www.youtube.com/c/veritasium']

As a workaround I currently copy the link from the channel name when watching a video. This link has the needed channel ID.

Playlist links in this format: https://www.youtube.com/watch?v=aFPJf-wKTd0&list=UUHnyfMqiRRG1u-2MsSQLbXA&index=2 are parsed as a single video, not as a playlist.

Pagination for downloads page

I currently have 1000+ videos in the download queue. Requesting all the thumbnails at once from YouTube slows down the page and is also too many requests, I guess 😄

Server responds with 500 error when adding new channel

Running the latest tag on Docker.

As soon as I try to add a channel (AVE for example) it returns a 500 response.

Logs don't give much away:

2021-09-26 22:31:30 [pid: 37|app: 0|req: 33/33] 10.0.0.2 () {44 vars in 1096 bytes} [Sun Sep 26 21:31:30 2021] GET /channel/ => generated 3715 bytes in 27 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
2021-09-26 22:31:32 {'csrfmiddlewaretoken': ['XXXXXXXXXXXXXXXXXX'], 'subscribe': ['https://www.youtube.com/watch?v=ztpWsuUItrA&t=4s']}
2021-09-26 22:31:34 scrape data from youtube
2021-09-26 22:31:34 [pid: 37|app: 0|req: 34/34] 10.0.0.2 () {52 vars in 1285 bytes} [Sun Sep 26 21:31:32 2021] POST /channel/ => generated 145 bytes in 2133 msecs (HTTP/1.1 500) 6 headers in 184 bytes (1 switches on core 0)
2021-09-26 22:31:34 [pid: 37|app: 0|req: 35/35] 10.0.0.2 () {42 vars in 1004 bytes} [Sun Sep 26 21:31:34 2021] GET /favicon.ico => generated 179 bytes in 2 msecs (HTTP/1.1 404) 5 headers in 158 bytes (1 switches on core 0)

some requirements

issues

  • unwatching (unchecking) a video is not possible
  • "show / hide subscribed channels" is shown twice, on the channel page and in the config

tubearchivist

  • deleting videos/channels
  • blacklist videos/channels
  • playlists, similar to channel subscriptions

load only information, and imho unique: extended sorting (length, date(s), channel, title, views, likes) < my favorite, I'll work on it :)

project / community / wiki

  • some docker commands could be useful, or a "how to" for keeping everything up to date
  • for more space/storage: how to mount host volumes instead of docker volumes (and how to migrate)
  • enable discussions (if you agree, of course) https://docs.github.com/en/discussions/quickstart

dev / contributors

how to set up a dev environment for tubearchivist :-)

Killing a job

I have a stuck download; what's the best way to delete a download that's running?

Scanning of subscriptions

Currently, the channel page size (default=50) is the number of videos scanned from the newest backwards; however, if I just started to archive a new channel, 50 is probably insufficient. Instead of manually changing the value to something else, is it possible to let it scan for the latest 50 not-yet-downloaded videos?

ElasticSearch Redis unable to connect

Hi!

I'm running this image on my Unraid box with separate containers for Redis and ElasticSearch, since I have other uses for these containers as well. I'm unable to get past Internal Server Error. Attaching log files further down. Does anything look strange? Reach out if additional information on Unraid is needed/wanted.

os: Linux-5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021
nodename: 864dd501bb25
machine: x86_64
clock source: unix
detected number of CPU cores: 8
current working directory: /app
writing pidfile to /tmp/project-master.pid
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

your processes number limit is 191529
your memory page size is 4096 bytes
detected max file descriptor number: 40960
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :8080 fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

Python version: 3.9.7 (default, Sep 3 2021, 02:02:37) [GCC 10.2.1 20210110]

*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x556780637870
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145808 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
Traceback (most recent call last):
  File "/app/./config/wsgi.py", line 16, in <module>
    application = get_wsgi_application()
  File "/usr/local/lib/python3.9/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
    django.setup(set_prefix=False)
  File "/usr/local/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 91, in populate
    app_config = AppConfig.create(entry)
  File "/usr/local/lib/python3.9/site-packages/django/apps/config.py", line 212, in create
    mod = import_module(mod_path)
  File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/app/./home/__init__.py", line 7, in <module>
    from home.src.index_management import index_check
  File "/app/./home/src/index_management.py", line 160, in <module>
    class ElasticIndex:
  File "/app/./home/src/index_management.py", line 165, in ElasticIndex
    CONFIG = AppConfig().config
  File "/app/./home/src/config.py", line 18, in __init__
    self.config = self.get_config()
  File "/app/./home/src/config.py", line 22, in get_config
    config = self.get_config_redis()
  File "/app/./home/src/config.py", line 54, in get_config_redis
    config = get_message("config")
  File "/app/./home/src/helper.py", line 96, in get_message
    reply = redis_connection.execute_command("JSON.GET", key)
  File "/usr/local/lib/python3.9/site-packages/redis/client.py", line 901, in execute_command
    return self.parse_response(conn, command_name, **options)
  File "/usr/local/lib/python3.9/site-packages/redis/client.py", line 915, in parse_response
    response = connection.read_response()
  File "/usr/local/lib/python3.9/site-packages/redis/connection.py", line 756, in read_response
    raise response
redis.exceptions.ResponseError: unknown command JSON.GET, with args beginning with: config,
unable to load app 0 (mountpoint='') (callable not found or import error)

*** no app loaded. going in full dynamic mode ***
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***

*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 12)
spawned uWSGI worker 1 (pid: 22, cores: 1)
Usage: celery [OPTIONS] COMMAND [ARGS]...

Error: Invalid value for '-A' / '--app':

Unable to load celery application.
While trying to load the module home.tasks the following error occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/celery/bin/celery.py", line 53, in convert
    return find_app(value)
  File "/usr/local/lib/python3.9/site-packages/celery/app/utils.py", line 384, in find_app
    sym = symbol_by_name(app, imp=imp)
  File "/usr/local/lib/python3.9/site-packages/kombu/utils/imports.py", line 56, in symbol_by_name
    module = imp(module_name, package=package, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/celery/utils/imports.py", line 100, in import_from_cwd
    return imp(module, package=package)
  File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/app/home/__init__.py", line 7, in <module>
    from home.src.index_management import index_check
  File "/app/home/src/index_management.py", line 160, in <module>
    class ElasticIndex:
  File "/app/home/src/index_management.py", line 165, in ElasticIndex
    CONFIG = AppConfig().config
  File "/app/home/src/config.py", line 18, in __init__
    self.config = self.get_config()
  File "/app/home/src/config.py", line 22, in get_config
    config = self.get_config_redis()
  File "/app/home/src/config.py", line 54, in get_config_redis
    config = get_message("config")
  File "/app/home/src/helper.py", line 96, in get_message
    reply = redis_connection.execute_command("JSON.GET", key)
  File "/usr/local/lib/python3.9/site-packages/redis/client.py", line 901, in execute_command
    return self.parse_response(conn, command_name, **options)
  File "/usr/local/lib/python3.9/site-packages/redis/client.py", line 915, in parse_response
    response = connection.read_response()
  File "/usr/local/lib/python3.9/site-packages/redis/connection.py", line 756, in read_response
    raise response
redis.exceptions.ResponseError: unknown command JSON.GET, with args beginning with: config,

--- no python application found, check your startup logs for errors ---
[pid: 22|app: -1|req: -1/1] 10.0.8.2 () {40 vars in 650 bytes} [Sat Oct 2 20:20:37 2021] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
--- no python application found, check your startup logs for errors ---
[pid: 22|app: -1|req: -1/2] 10.0.8.2 () {40 vars in 620 bytes} [Sat Oct 2 20:20:38 2021] GET /favicon.ico => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)

[Security] Hardcoded Django Secret Key

There is a hardcoded SECRET_KEY value here, in the settings.py of the project.

From the Django docs regarding SECRET_KEY:

Warning

Keep this value secret.

Running Django with a known SECRET_KEY defeats many of Django's security protections, and can lead to privilege escalation and remote code execution vulnerabilities.

I recommend changing this line to something like:

SECRET_KEY = str(os.getenv("DJANGO_SECRET_KEY"))

Or better yet:

if os.getenv("DJANGO_SECRET_KEY_FILE"):
    with open(os.getenv("DJANGO_SECRET_KEY_FILE"), "r") as secret_file:
        SECRET_KEY = secret_file.read().strip()
else:
    SECRET_KEY = os.getenv("DJANGO_SECRET_KEY")

Which gives the option to pass the value more securely via Docker or Kubernetes Secrets.
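
As an illustration of the Docker Secrets route (a sketch only; DJANGO_SECRET_KEY_FILE is the hypothetical variable proposed above, not an existing setting):

services:
  tubearchivist:
    environment:
      - DJANGO_SECRET_KEY_FILE=/run/secrets/django_secret_key
    secrets:
      - django_secret_key

secrets:
  django_secret_key:
    file: ./django_secret_key.txt   # keep this file out of version control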

(There are also Django settings packages which can reduce the boilerplate code here, provide this kind of file/env var support and do type casting for any setting in Django. I'm out of the loop on Django development these days or I'd recommend one, but I have seen that they exist.)

Multi-arch images?

Thanks for working on this, it is a promising project!

I was trying to spin-up tubearchivist on a RPi 4 running 64-bit Raspbian OS and while the containers get built fine, I run into the standard_init_linux.go:228: exec user process caused: exec format error for the redis JSON and the tubearchivist containers (the Elasticsearch seems to install fine).

Is there a plan for building multi-arch images especially for arm64?

NAS mounting NFS/CIFS

In my case, I have this running on a virtual server of Ubuntu Server 18.04 LTS and wish to use the VM to just run the tool while the actual content is stored on my NAS.

I can connect to my NAS through NFS or CIFS but prefer NFS for myself.

I have tried mounting the NFS share as /mnt/youtube and then creating a soft link to /youtube, but with that the media folder inside Docker does not save to the NAS.

I've also tried modifying the docker-compose.yml file to mount the NAS path directly to ./volumes/tubearchivist/media, but that also resulted in the content being saved there without appearing on the NAS.

What can be done?
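
One approach worth trying (untested here; the NAS address and export path are placeholders) is to let Docker perform the NFS mount itself via a named volume, instead of bind-mounting a host path:

volumes:
  youtube-media:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,rw      # placeholder NAS address
      device: ":/export/youtube"   # placeholder export path

services:
  tubearchivist:
    volumes:
      - youtube-media:/youtube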

ElasticsearchException[failed to bind service] with new TA install

Hey, just saw your post on Reddit and thought I'd give it a whirl. Ran into an issue getting ElasticSearch started, not sure why. I thought (based on the "Access Denied" in the logs) it was a permission error, so I already ran chown -R admin:admin (the user that UID/GID 1001 maps to on my system) on all the folders that TA uses, brought the project down and back up again, no dice.

System Details:

OS: Debian 10
Docker version: 20.10.8, build 3967b7d

docker-compose.yml file:

version: '3.3'

services:
  tubearchivist:
    container_name: tubearchivist
    restart: always
    image: bbilly1/tubearchivist:latest
    ports:
      - 8008:8000
    volumes:
      - /mnt/storage/media/youtube/media:/youtube
      - /mnt/storage/media/youtube/cache:/cache
    environment:
      - ES_URL=http://archivist-es:9200
      - REDIS_HOST=archivist-redis
      - HOST_UID=1001
      - HOST_GID=1001
    depends_on:
      - archivist-es
      - archivist-redis
  archivist-redis:
    image: redislabs/rejson:latest
    container_name: archivist-redis
    restart: always
    ports:
      - 6379:6379
    volumes:
      - /mnt/storage/media/youtube/redis:/data
    depends_on:
      - archivist-es
  archivist-es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.1
    container_name: archivist-es
    restart: always
    environment:
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /mnt/storage/media/youtube/es:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

docker logs -f archivist-es output:

{"type": "server", "timestamp": "2021-09-15T21:42:17,960Z", "level": "ERROR", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "docker-cluster", "node.name": "2928085f95ba", "message": "uncaught exception in thread [main]", 
"stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116) ~[elasticsearch-cli-7.14.1.jar:7.14.1]",
"at org.elasticsearch.cli.Command.main(Command.java:79) ~[elasticsearch-cli-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81) ~[elasticsearch-7.14.1.jar:7.14.1]",
"Caused by: org.elasticsearch.ElasticsearchException: failed to bind service",
"at org.elasticsearch.node.Node.<init>(Node.java:798) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.node.Node.<init>(Node.java:281) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:219) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:219) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:399) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.14.1.jar:7.14.1]",
"... 6 more",
"Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes",
"at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
"at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:396) ~[?:?]",
"at java.nio.file.Files.createDirectory(Files.java:694) ~[?:?]",
"at java.nio.file.Files.createAndCheckIsDirectory(Files.java:801) ~[?:?]",
"at java.nio.file.Files.createDirectories(Files.java:787) ~[?:?]",
"at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:265) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:202) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:262) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.node.Node.<init>(Node.java:376) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.node.Node.<init>(Node.java:281) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:219) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:219) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:399) ~[elasticsearch-7.14.1.jar:7.14.1]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.14.1.jar:7.14.1]",
"... 6 more"] }
uncaught exception in thread [main]
ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at java.base/sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:396)
	at java.base/java.nio.file.Files.createDirectory(Files.java:694)
	at java.base/java.nio.file.Files.createAndCheckIsDirectory(Files.java:801)
	at java.base/java.nio.file.Files.createDirectories(Files.java:787)
	at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:265)
	at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:202)
	at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:262)
	at org.elasticsearch.node.Node.<init>(Node.java:376)
	at org.elasticsearch.node.Node.<init>(Node.java:281)
	at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:219)
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:219)
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:399)
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
	at org.elasticsearch.cli.Command.main(Command.java:79)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81)
For complete error details, refer to the log at /usr/share/elasticsearch/logs/docker-cluster.log

Also, I've checked /mnt/storage/media/youtube/es and there's no log file there, sadly. Let me know if you need any more information.

Files downloading but stopping at change of ownership (chown)

Hi, the system seems to work (latest version).
But my NFS mount does not allow me to chown files, only write them.

Download works, and saving the file in the folder does too. But the system seems to try to change ownership and then stops; after that, no other video gets downloaded. Is that a correct reading, or should it work even with that warning?

[2021-10-18 09:12:48,975: ERROR/ForkPoolWorker-4] Task home.tasks.download_pending[d458a9ea-d598-48e0-b43e-d4b9bfb427a6] raised unexpected: PermissionError(1, 'Operation not permitted')
[2021-10-18 09:12:47,024: WARNING/ForkPoolWorker-4] get video data for c6C2bh71skQ
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 450, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 731, in __protected_call__
    return self.run(*args, **kwargs)
  File "/app/home/tasks.py", line 58, in download_pending
    downloader.run_queue()
  File "/app/home/src/download.py", line 450, in run_queue
    self.move_to_archive(vid_dict)
  File "/app/home/src/download.py", line 593, in move_to_archive
    os.chown(new_file_path, host_uid, host_gid)
PermissionError: [Errno 1] Operation not permitted: '/youtube/Tchoupi/20200120_c6C2bh71skQ_Tchoupi a lecole - Papi au temps des dinosaures (S.2 EP.38).mp4'

Thanks a lot

Playlist support in UI

Not sure if this has been asked yet, but would a playlist view be too difficult to add to the UI? I have one subscription that posts a few videos a day but uploads them into separate playlists... When I click on the person I'm subscribed to and it shows all their videos, could there be an "all" button or filter plus a "playlist" button? You would then click playlist 1 or 2 to display a list of videos in that given playlist. If not, no worries, just something I could really use. I would contribute but I'm no programmer. TIA, and great program BTW!

Exposing Port 80?

Hi, quick question:

Why exactly is the Dockerfile exposing port 80 when the listener is configured on port 8080?

Best regards

Awesome app, some feedback

Want to say I love this, and it's really going the way I want. I archive LOTS of channels, so if you need any stress testing please let me know.

A couple of ideas from other tools I've used:

Allow custom naming - I use Plex or other plugins that need a specific name
Create .nfo files for Plex/Kodi etc.
Save metadata and a thumbnail in the channel folder

Alternative channel layout (grid). My archive has 350-400 channels in it, so a single column is rough.

But seriously, keep on keeping on. I'm pretty excited about this.

Seems to break after 15 subscriptions

This has happened on two different installations on two different servers. I don't know if it's a coincidence, but the app seems to break and stop scraping downloads after I add 15 subscriptions. I try to add several more and it adds them, and they show in subscriptions, but nothing downloads no matter what channel I add. Here are the logs: https://snip.lol/JipE3/ROkakuzA63.txt

Sorry to keep making issues, but I think it's important to the development of the app. Maybe it's something simple I'm missing. Let me know if you have any questions for me.

_tubearchivist_logs.txt

Any way to get nfo files for Emby/Plex Metadata?

My main goal is to get the embedded thumbnails to display in Emby. I've come to realize this can't be done without an .nfo file? I'm not sure if this can be done with tubearchivist. Can it create .nfo files for metadata? FYI, I do have the embedded metadata option enabled in settings.

Updating instructions

I've been running version 0.0.3 for a few days now and saw that 0.0.4 came out. The readme file now mentions upgrade instructions by checking the docker-compose.yml file.

I did this and then used 'sudo docker-compose down', updated my copy of the file manually (because I made changes to make it work on my system), then did 'sudo docker-compose up -d'; my ElasticSearch image updated automatically but my TubeArchivist image did not. It still came up as version 0.0.3. I brought it down again and then used 'sudo docker pull bbilly1/tubearchivist:latest' to get the latest one, and it worked.

Not sure if it's an issue with my custom compose file, with the way Docker works, or if the Upgrade section of the readme just needs updating.

Here's my custom docker-compose.yml file to confirm.

version: '3.3'

services:
  tubearchivist:
    container_name: tubearchivist
    restart: always
    image: bbilly1/tubearchivist:latest
    ports:
      - 8000:8000
    volumes:
      - /mnt/YouTube:/youtube
      - ./volumes/tubearchivist/cache:/cache
    environment:
      - ES_URL=http://archivist-es:9200
      - REDIS_HOST=archivist-redis
      - HOST_UID=1000
      - HOST_GID=1000
    depends_on:
      - archivist-es
      - archivist-redis
  archivist-redis:
    image: redislabs/rejson:latest
    container_name: archivist-redis
    restart: always
    ports:
      - 6379:6379
    volumes:
      - ./volumes/tubearchivist/redis:/data
    depends_on:
      - archivist-es
  archivist-es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    container_name: archivist-es
    restart: always
    environment:
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./volumes/tubearchivist/es:/home/itamar/elasticsearch/data
    ports:
      - 9200:9200

Manual media import not available

Not really sure what I'm doing wrong here, but the manual import still says "Coming soon". I've tried removing and re-creating the containers and re-downloading the code from GitHub, but nothing seems to help. What am I overlooking?

Typos discovered by codespell

codespell --ignore-words-list=nd

./tubearchivist/home/views.py:4: recieved ==> received
./tubearchivist/home/views.py:223: chanel ==> channel
./tubearchivist/home/templates/home/settings.html:12: prefered ==> preferred
./tubearchivist/home/src/download.py:75: downlaoded ==> downloaded
./tubearchivist/home/src/helper.py:111: alterative ==> alternative
./tubearchivist/home/src/index.py:219: signle ==> single, signal
./tubearchivist/home/src/reindex.py:214: missmatch ==> mismatch
./tubearchivist/home/src/reindex.py:299: missmatch ==> mismatch
./tubearchivist/home/src/reindex.py:313: missmatch ==> mismatch
./tubearchivist/home/src/reindex.py:457: missmatch ==> mismatch
./tubearchivist/home/src/searching.py:75: chache ==> cache

Not a directory error when encountering macOS thumbnail files.

Rescanning channels crashes if the download path has been accessed from the macOS Finder, which leaves behind .DS_Store and ._filename thumbnail files, among others.

It should be a pretty simple fix if you add an exemption for OS temp files to the check.
This gitignore file should be useful for figuring out which files are created: https://github.com/github/gitignore/blob/master/Global/macOS.gitignore

Log below:
[2021-09-24 16:04:05,714: ERROR/ForkPoolWorker-4] Task home.tasks.update_subscribed[e0197cb6-a19f-4f27-b6e4-1481c344944b] raised unexpected: NotADirectoryError(20, 'Not a directory')

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 450, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 731, in __protected_call__
    return self.run(*args, **kwargs)
  File "/app/home/tasks.py", line 29, in update_subscribed
    missing_videos = channel_handler.find_missing()
  File "/app/home/src/download.py", line 338, in find_missing
    all_downloaded = pending_handler.get_all_downloaded()
  File "/app/home/src/download.py", line 221, in get_all_downloaded
    all_videos = os.listdir(channel_path)
NotADirectoryError: [Errno 20] Not a directory: '/youtube/.DS_Store'
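
The traceback shows os.listdir() being called on /youtube/.DS_Store, i.e. the scan treats every entry in the media root as a channel directory. A minimal sketch of the suggested exemption (a hypothetical helper, not the actual TA code):

import os

def list_channel_dirs(media_root):
    """Yield channel folders only, skipping macOS metadata files."""
    for entry in os.listdir(media_root):
        if entry.startswith("."):   # .DS_Store, ._* resource forks, etc.
            continue
        path = os.path.join(media_root, entry)
        if os.path.isdir(path):     # ignore stray files at the top level
            yield entry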

Server Not Responding with deploy.sh test [with workaround]

Issue:

When building the local environment using the deploy.sh script, the tubearchivist image does not properly instantiate the web server.

Details:

Using the base build details from the current testing or primary branches and then attempting to start the tubearchivist Docker image, the server does not start appropriately. This is because the web service is unable to build the style.css file properly due to missing references (.ttf/.woff font files). The server returns a 500 response because there isn't a good home page (or any page, for that matter) to reference.

Expected Result:

Server returns a 200 status response and supplies the appropriate web page response.

Actual Result:

Server returns a 500 status response and supplies no web page with the response.

Workaround Found:

Docker has a copy command, docker cp, which can copy local files into a container. Copying the requested files (pulled from an existing, working container of a previous version) into the expected location and then restarting the container allows it to bring the web service up properly and supply the Expected Result.

Note:

Attempting to create the directory in the build location before creating the Docker image did not produce the Expected Result. Only copying the files into the directory via docker cp actually brought the web server into a responsive state.

Logs and command outputs:

Building with deploy.sh test

building file list ...
114 files to consider
sending incremental file list
docker-compose.yml
          1,098 100%  388.67kB/s    0:00:00 (xfr#1, to-chk=0/1)
Sending build context to Docker daemon  471.6kB
Step 1/19 : FROM python:3.9.7-slim-bullseye
 ---> e455ca30507a
Step 2/19 : ENV PYTHONUNBUFFERED 1
 ---> Using cache
 ---> e3bb53f200c1
Step 3/19 : RUN apt-get clean && apt-get -y update && apt-get -y install --no-install-recommends     build-essential     nginx     curl && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> d8a0c4648191
Step 4/19 : RUN curl -s https://api.github.com/repos/yt-dlp/FFmpeg-Builds/releases/latest     | grep browser_download_url     | grep linux64-nonfree.tar.xz     | cut -d '"' -f 4     | xargs curl -L --output ffmpeg.tar.xz &&     tar -xf ffmpeg.tar.xz --strip-components=2 --no-anchored -C /usr/bin/ "ffmpeg" &&     tar -xf ffmpeg.tar.xz --strip-components=2 --no-anchored -C /usr/bin/ "ffprobe" &&     rm ffmpeg.tar.xz
 ---> Using cache
 ---> d50733852bcb
Step 5/19 : COPY nginx.conf /etc/nginx/conf.d/
 ---> Using cache
 ---> ef6168e041bf
Step 6/19 : RUN mkdir /cache
 ---> Using cache
 ---> 24ff5e3af03c
Step 7/19 : RUN mkdir /youtube
 ---> Using cache
 ---> ce16b112c7a5
Step 8/19 : RUN mkdir /app
 ---> Using cache
 ---> c2c10588e4b3
Step 9/19 : COPY ./tubearchivist/requirements.txt /requirements.txt
 ---> Using cache
 ---> f9384103603b
Step 10/19 : RUN pip install --no-cache-dir -r requirements.txt --src /usr/local/src
 ---> Using cache
 ---> 4dba2dd95a6b
Step 11/19 : COPY ./tubearchivist /app
 ---> Using cache
 ---> 7cae16ac0c75
Step 12/19 : COPY ./run.sh /app
 ---> Using cache
 ---> 5e11c3b6453e
Step 13/19 : COPY ./uwsgi.ini /app
 ---> Using cache
 ---> c5ef45c842eb
Step 14/19 : VOLUME /cache
 ---> Using cache
 ---> c2c2b0302cac
Step 15/19 : VOLUME /youtube
 ---> Using cache
 ---> c3d30c993f20
Step 16/19 : WORKDIR /app
 ---> Using cache
 ---> cd9f6c1f7960
Step 17/19 : EXPOSE 8000
 ---> Using cache
 ---> 5b974a0e9dfc
Step 18/19 : RUN chmod +x ./run.sh
 ---> Using cache
 ---> 43caaee852bd
Step 19/19 : CMD ["./run.sh"]
 ---> Using cache
 ---> d2c750d9fec2
Successfully built d2c750d9fec2
Successfully tagged bbilly1/tubearchivist:latest
archivist-es is up-to-date
archivist-redis is up-to-date
tubearchivist is up-to-date

Logs showing archivist-es is running and working

ubuntu@ubuntu-dev-amd64:~/tubearchivist-build$ docker logs -f archivist-es
{"type": "server", "timestamp": "2021-10-05T21:18:13,432Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "version[7.15.0], pid[8], build[default/docker/79d65f6e357953a5b3cbcc5e2c7c21073d89aa29/2021-09-16T03:05:29.143308416Z], OS[Linux/5.4.0-88-generic/amd64], JVM[Eclipse Foundation/OpenJDK 64-Bit Server VM/16.0.2/16.0.2+7]" }
{"type": "server", "timestamp": "2021-10-05T21:18:13,451Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
{"type": "server", "timestamp": "2021-10-05T21:18:13,453Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-3159410211284329604, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -XX:MaxDirectMemorySize=268435456, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,628Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [aggs-matrix-stats]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,629Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [analysis-common]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,631Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [constant-keyword]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,633Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [frozen-indices]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,635Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [ingest-common]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,644Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [ingest-geoip]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,645Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [ingest-user-agent]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,646Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [kibana]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,647Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [lang-expression]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,647Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [lang-mustache]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,648Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [lang-painless]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,649Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [mapper-extras]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,652Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [mapper-version]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,654Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [parent-join]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,655Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [percolator]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,656Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [rank-eval]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,658Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [reindex]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,658Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [repositories-metering-api]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,659Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [repository-encrypted]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,661Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [repository-url]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,663Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [runtime-fields-common]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,666Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [search-business-rules]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,667Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [searchable-snapshots]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,668Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [snapshot-repo-test-kit]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,669Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [spatial]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,670Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [transform]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,672Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [transport-netty4]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,673Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [unsigned-long]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,674Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [vector-tile]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,675Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [vectors]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,677Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [wildcard]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,678Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-aggregate-metric]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,679Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-analytics]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,681Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-async]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,682Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-async-search]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,684Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-autoscaling]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,685Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-ccr]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,687Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-core]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,688Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-data-streams]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,690Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-deprecation]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,691Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-enrich]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,692Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-eql]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,694Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-fleet]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,696Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-graph]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,698Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-identity-provider]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,699Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-ilm]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,700Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-logstash]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,702Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-ml]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,704Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-monitoring]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,706Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-ql]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,707Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-rollup]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,708Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-security]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,709Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-shutdown]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,711Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-sql]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,715Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-stack]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,717Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-text-structure]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,721Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-voting-only-node]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,722Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "loaded module [x-pack-watcher]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,730Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "no plugins loaded" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,923Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/ubuntu--vg-ubuntu--lv)]], net usable_space [13.4gb], net total_space [23.9gb], types [ext4]" }
{"type": "server", "timestamp": "2021-10-05T21:18:24,924Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "heap size [512mb], compressed ordinary object pointers [true]" }
{"type": "server", "timestamp": "2021-10-05T21:18:25,022Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "node name [1f8e61b3fb75], node ID [0cVnt5KFTCu9dyZTyx9xxA], cluster name [docker-cluster], roles [transform, data_frozen, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]" }
{"type": "server", "timestamp": "2021-10-05T21:18:50,172Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "[controller/209] [Main.cc@122] controller (64 bit): Version 7.15.0 (Build d0ab43b6c551f8) Copyright (c) 2021 Elasticsearch BV" }
{"type": "server", "timestamp": "2021-10-05T21:18:51,655Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
{"type": "server", "timestamp": "2021-10-05T21:18:54,181Z", "level": "INFO", "component": "o.e.i.g.LocalDatabases", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/usr/share/elasticsearch/config/ingest-geoip] for changes" }
{"type": "server", "timestamp": "2021-10-05T21:18:54,187Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "initialized database registry, using geoip-databases directory [/tmp/elasticsearch-3159410211284329604/geoip-databases/0cVnt5KFTCu9dyZTyx9xxA]" }
{"type": "server", "timestamp": "2021-10-05T21:18:58,222Z", "level": "INFO", "component": "o.e.t.NettyAllocator", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=1mb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=4mb, heap_size=512mb}]" }
{"type": "server", "timestamp": "2021-10-05T21:18:58,702Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "using discovery type [single-node] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2021-10-05T21:19:02,273Z", "level": "INFO", "component": "o.e.g.DanglingIndicesState", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually" }
{"type": "server", "timestamp": "2021-10-05T21:19:05,567Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "initialized" }
{"type": "server", "timestamp": "2021-10-05T21:19:05,571Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "starting ..." }
{"type": "server", "timestamp": "2021-10-05T21:19:06,631Z", "level": "INFO", "component": "o.e.x.s.c.f.PersistentCache", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "persistent cache index loaded" }
{"type": "server", "timestamp": "2021-10-05T21:19:07,161Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "publish_address {172.24.0.2:9300}, bound_addresses {0.0.0.0:9300}" }
{"type": "server", "timestamp": "2021-10-05T21:19:10,834Z", "level": "WARN", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]" }
{"type": "server", "timestamp": "2021-10-05T21:19:10,875Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "setting initial configuration to VotingConfiguration{0cVnt5KFTCu9dyZTyx9xxA}" }
{"type": "server", "timestamp": "2021-10-05T21:19:13,366Z", "level": "INFO", "component": "o.e.c.s.MasterService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "elected-as-master ([1] nodes joined)[{1f8e61b3fb75}{0cVnt5KFTCu9dyZTyx9xxA}{GCjWF8ZcT-CVDKYXpxDpfQ}{172.24.0.2}{172.24.0.2:9300}{cdfhilmrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{1f8e61b3fb75}{0cVnt5KFTCu9dyZTyx9xxA}{GCjWF8ZcT-CVDKYXpxDpfQ}{172.24.0.2}{172.24.0.2:9300}{cdfhilmrstw}]}" }
{"type": "server", "timestamp": "2021-10-05T21:19:13,952Z", "level": "INFO", "component": "o.e.c.c.CoordinationState", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "cluster UUID set to [o4COgb3RTwOsisd4j6DiZg]" }
{"type": "server", "timestamp": "2021-10-05T21:19:14,429Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "master node changed {previous [], current [{1f8e61b3fb75}{0cVnt5KFTCu9dyZTyx9xxA}{GCjWF8ZcT-CVDKYXpxDpfQ}{172.24.0.2}{172.24.0.2:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}" }
{"type": "server", "timestamp": "2021-10-05T21:19:14,722Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "publish_address {172.24.0.2:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:14,724Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "started", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:15,772Z", "level": "INFO", "component": "o.e.g.GatewayService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "recovered [0] indices into cluster_state", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:16,816Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [.ml-stats] for index patterns [.ml-stats-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:17,487Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [data-streams-mappings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:17,969Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [.ml-state] for index patterns [.ml-state*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:18,663Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [logs-mappings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:19,553Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:20,167Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:20,695Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [logs-settings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:21,359Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [synthetics-settings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:21,896Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [synthetics-mappings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:22,353Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [metrics-mappings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:22,776Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [metrics-settings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:23,647Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [.watch-history-13] for index patterns [.watcher-history-13*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:24,093Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [ilm-history] for index patterns [ilm-history-5*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:24,503Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [.slm-history] for index patterns [.slm-history-5*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:24,941Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [.deprecation-indexing-mappings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:25,521Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding component template [.deprecation-indexing-settings]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:25,887Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding template [.monitoring-alerts-7] for index patterns [.monitoring-alerts-7]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:26,506Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding template [.monitoring-es] for index patterns [.monitoring-es-7-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:26,994Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-7-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:27,605Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-7-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:28,045Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding template [.monitoring-beats] for index patterns [.monitoring-beats-7-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:28,597Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [logs] for index patterns [logs-*-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:28,983Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [synthetics] for index patterns [synthetics-*-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:29,426Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "[ta_channel] creating index, cause [api], templates [], shards [1]/[0]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:30,620Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [metrics] for index patterns [metrics-*-*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:31,374Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:32,104Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ta_channel][0]]]).", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:32,680Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [ml-size-based-ilm-policy]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:33,307Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "[ta_video] creating index, cause [api], templates [], shards [1]/[0]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:34,077Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [logs]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:34,782Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [metrics]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:35,291Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ta_video][0]]]).", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:35,736Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [synthetics]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:37,868Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "[ta_download] creating index, cause [api], templates [], shards [1]/[0]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:38,757Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [watch-history-ilm-policy]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:40,132Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [ilm-history-ilm-policy]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:40,810Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[ta_download][0]]]).", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:41,583Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [slm-history-ilm-policy]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:42,413Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [.deprecation-indexing-ilm-policy]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:43,825Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "adding index lifecycle policy [.fleet-actions-results-ilm-policy]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:45,781Z", "level": "ERROR", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "failed to set monitoring pipeline [xpack_monitoring_7]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA" ,
"stacktrace": ["org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-pipeline-xpack_monitoring_7) within 30s",
"at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$0(MasterService.java:147) [elasticsearch-7.15.0.jar:7.15.0]",
"at java.util.ArrayList.forEach(ArrayList.java:1511) [?:?]",
"at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$1(MasterService.java:146) [elasticsearch-7.15.0.jar:7.15.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.15.0.jar:7.15.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]"] }
{"type": "server", "timestamp": "2021-10-05T21:19:45,913Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "updating geoip databases", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:45,915Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "fetching geoip databases overview from [https://geoip.elastic.co/v1/database?elastic_geoip_service_tos=agree]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:46,697Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "updating geoip database [GeoLite2-ASN.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:46,877Z", "level": "ERROR", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "failed to set monitoring pipeline [xpack_monitoring_6]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA" ,
"stacktrace": ["org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-pipeline-xpack_monitoring_6) within 30s",
"at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$0(MasterService.java:147) [elasticsearch-7.15.0.jar:7.15.0]",
"at java.util.ArrayList.forEach(ArrayList.java:1511) [?:?]",
"at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$1(MasterService.java:146) [elasticsearch-7.15.0.jar:7.15.0]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.15.0.jar:7.15.0]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:831) [?:?]"] }
{"type": "server", "timestamp": "2021-10-05T21:19:47,581Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "license [ef60de8b-2797-417c-91bb-a76658a86b7b] mode [basic] - valid", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:47,597Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "Active license is now [BASIC]; Security is disabled", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:47,599Z", "level": "WARN", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.15/security-minimal-setup.html to enable security.", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "deprecation.elasticsearch", "timestamp": "2021-10-05T21:19:47,603Z", "level": "DEPRECATION", "component": "o.e.d.x.s.s.SecurityStatusChangeListener", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "The default behavior of disabling security on basic licenses is deprecated. In a later version of Elasticsearch, the value of [xpack.security.enabled] will default to \"true\" , regardless of the license level. See https://www.elastic.co/guide/en/elasticsearch/reference/7.15/security-minimal-setup.html to enable security, or explicitly disable security by setting [xpack.security.enabled] to false in elasticsearch.yml", "key": "security_implicitly_disabled", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:47,976Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "[.geoip_databases] creating index, cause [auto(bulk api)], templates [], shards [1]/[0]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:51,234Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.geoip_databases][0]]]).", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:56,510Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "downloading geoip database [GeoLite2-ASN.mmdb] to [/tmp/elasticsearch-3159410211284329604/geoip-databases/0cVnt5KFTCu9dyZTyx9xxA/GeoLite2-ASN.mmdb.tmp.gz]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:56,731Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "updated geoip database [GeoLite2-ASN.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:56,987Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "updating geoip database [GeoLite2-City.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:19:59,350Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "successfully reloaded changed geoip database file [/tmp/elasticsearch-3159410211284329604/geoip-databases/0cVnt5KFTCu9dyZTyx9xxA/GeoLite2-ASN.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:20:23,599Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "downloading geoip database [GeoLite2-City.mmdb] to [/tmp/elasticsearch-3159410211284329604/geoip-databases/0cVnt5KFTCu9dyZTyx9xxA/GeoLite2-City.mmdb.tmp.gz]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:20:23,753Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "updated geoip database [GeoLite2-City.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:20:23,763Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "updating geoip database [GeoLite2-Country.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:20:27,965Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "successfully reloaded changed geoip database file [/tmp/elasticsearch-3159410211284329604/geoip-databases/0cVnt5KFTCu9dyZTyx9xxA/GeoLite2-City.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:20:44,016Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "downloading geoip database [GeoLite2-Country.mmdb] to [/tmp/elasticsearch-3159410211284329604/geoip-databases/0cVnt5KFTCu9dyZTyx9xxA/GeoLite2-Country.mmdb.tmp.gz]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:20:44,064Z", "level": "INFO", "component": "o.e.i.g.GeoIpDownloader", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "updated geoip database [GeoLite2-Country.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }
{"type": "server", "timestamp": "2021-10-05T21:20:44,359Z", "level": "INFO", "component": "o.e.i.g.DatabaseRegistry", "cluster.name": "docker-cluster", "node.name": "1f8e61b3fb75", "message": "successfully reloaded changed geoip database file [/tmp/elasticsearch-3159410211284329604/geoip-databases/0cVnt5KFTCu9dyZTyx9xxA/GeoLite2-Country.mmdb]", "cluster.uuid": "o4COgb3RTwOsisd4j6DiZg", "node.id": "0cVnt5KFTCu9dyZTyx9xxA"  }

Logs showing archivist-redis is running and working

ubuntu@ubuntu-dev-amd64:~/tubearchivist-build$ docker logs -f archivist-redis
1:C 05 Oct 2021 21:17:59.423 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 05 Oct 2021 21:17:59.424 # Redis version=6.2.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 05 Oct 2021 21:17:59.424 # Configuration loaded
1:M 05 Oct 2021 21:17:59.429 * monotonic clock: POSIX clock_gettime
1:M 05 Oct 2021 21:17:59.440 * Running mode=standalone, port=6379.
1:M 05 Oct 2021 21:17:59.441 # Server initialized
1:M 05 Oct 2021 21:17:59.441 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 05 Oct 2021 21:17:59.441 # <ReJSON> JSON data type for Redis v1.0.8 [encver 0]
1:M 05 Oct 2021 21:17:59.442 * Module 'ReJSON' loaded from /usr/lib/redis/modules/rejson.so
1:M 05 Oct 2021 21:17:59.443 * Ready to accept connections
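
The overcommit_memory warning above is unrelated to the failure below, but if desired it can be addressed on the host exactly as the Redis log itself suggests:

# run on the Docker host; add 'vm.overcommit_memory = 1' to /etc/sysctl.conf to persist
sudo sysctl vm.overcommit_memory=1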

Logs showing that tubearchivist does not start correctly; the specific error is copied below.

ubuntu@ubuntu-dev-amd64:~/tubearchivist-build$ docker logs -f tubearchivist
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
waiting for elastic search to start
failed to connect to elastic search, exiting...
waiting for elastic search to start
waiting for elastic search to start
{
  "name" : "1f8e61b3fb75",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "o4COgb3RTwOsisd4j6DiZg",
  "version" : {
    "number" : "7.15.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "79d65f6e357953a5b3cbcc5e2c7c21073d89aa29",
    "build_date" : "2021-09-16T03:05:29.143308416Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
create new blank index with name ta_channel...
create new blank index with name ta_video...
create new blank index with name ta_download...
sync redis
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying auth.0012_alter_user_first_name_max_length... OK
  Applying sessions.0001_initial... OK
ta_channel index is created and up to date...
ta_video index is created and up to date...
ta_download index is created and up to date...
sync redis
Post-processing 'css/style.css' failed!

Traceback (most recent call last):
  File "/app/manage.py", line 23, in <module>
    main()
  File "/app/manage.py", line 19, in main
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python3.9/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 187, in handle
    collected = self.collect()
  File "/usr/local/lib/python3.9/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 134, in collect
    raise processed
whitenoise.storage.MissingFileError: The file 'font/Sen-Bold.ttf.woff' could not be found with <whitenoise.storage.CompressedManifestStaticFilesStorage object at 0x7f720b5eafd0>.

The CSS file 'css/style.css' references a file which could not be found:
  font/Sen-Bold.ttf.woff

Please check the URL references in this CSS file, particularly any
relative paths which might be pointing to the wrong location.

[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.19.1 (64bit) on [Tue Oct  5 21:19:55 2021] ***
compiled with version: 10.2.1 20210110 on 05 October 2021 21:13:25
os: Linux-5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021
nodename: 283f1e8b559e
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /app
writing pidfile to /tmp/project-master.pid
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :8080 fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.9.7 (default, Sep 28 2021, 18:41:28)  [GCC 10.2.1 20210110]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x55d547c95ff0
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145808 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
ta_channel index is created and up to date...
ta_video index is created and up to date...
ta_download index is created and up to date...
sync redis
/usr/local/lib/python3.9/site-packages/celery/platforms.py:834: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!

Please specify a different user using the --uid option.

User information: uid=0 euid=0 gid=0 egid=0

  warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
WSGI app 0 (mountpoint='') ready in 10 seconds on interpreter 0x55d547c95ff0 pid: 21 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 21)
spawned uWSGI worker 1 (pid: 27, cores: 1)

 -------------- celery@283f1e8b559e v5.1.2 (sun-harmonics)
--- ***** -----
-- ******* ---- Linux-5.4.0-88-generic-x86_64-with-glibc2.31 2021-10-05 16:20:05
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x7f0543aebee0
- ** ---------- .> transport:   redis://archivist-redis:6379//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery


[tasks]
  . home.tasks.check_reindex
  . home.tasks.download_pending
  . home.tasks.download_single
  . home.tasks.extrac_dl
  . home.tasks.run_backup
  . home.tasks.run_manual_import
  . home.tasks.run_restore_backup
  . home.tasks.update_subscribed

[2021-10-05 16:20:06,189: INFO/MainProcess] Connected to redis://archivist-redis:6379//
[2021-10-05 16:20:06,217: INFO/MainProcess] mingle: searching for neighbors
[2021-10-05 16:20:07,296: INFO/MainProcess] mingle: all alone
[2021-10-05 16:20:07,445: INFO/MainProcess] celery@283f1e8b559e ready.
[pid: 27|app: 0|req: 1/1] 192.168.0.150 () {42 vars in 753 bytes} [Tue Oct  5 21:22:18 2021] GET / => generated 145 bytes in 547 msecs (HTTP/1.1 500) 6 headers in 184 bytes (1 switches on core 0)

Specific error message (identical to the traceback already shown in the tubearchivist log above):

Post-processing 'css/style.css' failed!
whitenoise.storage.MissingFileError: The file 'font/Sen-Bold.ttf.woff' could not be found with <whitenoise.storage.CompressedManifestStaticFilesStorage object at 0x7f720b5eafd0>.

docker exec into the container, showing that the font directory is missing from the expected relative location.

ubuntu@ubuntu-dev-amd64:~/tubearchivist-build$ docker exec -it tubearchivist '/bin/bash'
root@283f1e8b559e:/app# ls
config  db.sqlite3  home  manage.py  requirements.txt  run.sh  static  staticfiles  testing.sh  uwsgi.ini
root@283f1e8b559e:/app# which ls
/bin/ls
root@283f1e8b559e:/app# cd static
root@283f1e8b559e:/app/static# ls
css  favicon.ico  img  progress.js  script.js
root@283f1e8b559e:/app/static# ls css/
dark.css  light.css  style.css
root@283f1e8b559e:/app/static# cd ../staticfiles/
root@283f1e8b559e:/app/staticfiles# ls
admin  css  favicon.ico  img  progress.js  script.js
root@283f1e8b559e:/app/staticfiles# ls css/
dark.934e9cd71cc4.css  dark.css  light.css  style.css

docker exec into the container, showing the style.css reference that fails to resolve the font files.

root@283f1e8b559e:/app/static# grep -C5 '.ttf.woff' css/style.css
@font-face {
font-family: 'Sen-Bold';
    src:  url('../font/Sen-Bold.ttf.woff');
    font-family: 'Sen-Bold';
}

@font-face {
font-family: 'Sen-Regular';
    src:  url('../font/Sen-Regular.ttf.woff');
    font-family: 'Sen-Regular';
}

* {
    margin: 0;

Commands showing the workaround

ubuntu@ubuntu-dev-amd64:~$ ls
docker  Dockerfile  font  tubearchivist  tubearchivist-build
ubuntu@ubuntu-dev-amd64:~$ docker cp font tubearchivist:/app/static/font
ubuntu@ubuntu-dev-amd64:~$ docker cp font tubearchivist:/app/staticfiles/font
ubuntu@ubuntu-dev-amd64:~$ docker exec -it tubearchivist '/bin/bash'
root@283f1e8b559e:/app# cd static
root@283f1e8b559e:/app/static# ls
css  favicon.ico  font  img  progress.js  script.js
root@283f1e8b559e:/app/static# ls font/
Sen-Bold.ttf.woff  Sen-Regular.ttf.woff
root@283f1e8b559e:/app/static# cd ..
root@283f1e8b559e:/app# cd staticfiles/
root@283f1e8b559e:/app/staticfiles# ls
admin  css  favicon.ico  font  img  progress.js  script.js
root@283f1e8b559e:/app/staticfiles# ls font/
Sen-Bold.ttf.woff  Sen-Regular.ttf.woff
root@283f1e8b559e:/app/staticfiles#
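
Note that docker cp only patches the running container; the copied files are lost if the container is recreated. A more durable local fix (a sketch, assuming the font files sit in a ./font directory next to the Dockerfile, as the host listing above suggests) would be to bake them into the image so collectstatic can find them at container start:

# hypothetical addition to the Dockerfile, alongside "COPY ./tubearchivist /app"
COPY ./font /app/static/font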

Showing that tubearchivist provides the expected response after the workaround

Note: the log snippet below is redacted to exclude output already shown above, since the Docker container was restarted rather than redeployed.
ubuntu@ubuntu-dev-amd64:~$ docker restart tubearchivist
tubearchivist
ubuntu@ubuntu-dev-amd64:~$ docker logs -f tubearchivist
[...]
{
  "name" : "1f8e61b3fb75",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "o4COgb3RTwOsisd4j6DiZg",
  "version" : {
    "number" : "7.15.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "79d65f6e357953a5b3cbcc5e2c7c21073d89aa29",
    "build_date" : "2021-09-16T03:05:29.143308416Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
ta_channel index is created and up to date...
ta_video index is created and up to date...
ta_download index is created and up to date...
sync redis
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  No migrations to apply.
ta_channel index is created and up to date...
ta_video index is created and up to date...
ta_download index is created and up to date...
sync redis
Deleting 'script.js'
Deleting 'favicon.ico'
Deleting 'progress.js'
Deleting 'admin/fonts/Roboto-Bold-webfont.50d75e48e0a3.woff'
Deleting 'admin/fonts/README.txt'
Deleting 'admin/fonts/LICENSE.txt'
Deleting 'admin/fonts/Roboto-Bold-webfont.woff'
Deleting 'admin/fonts/Roboto-Light-webfont.c73eb1ceba33.woff'
Deleting 'admin/fonts/Roboto-Light-webfont.woff'
Deleting 'admin/fonts/Roboto-Regular-webfont.woff'
Deleting 'admin/fonts/Roboto-Regular-webfont.35b07eb2f871.woff'
Deleting 'admin/fonts/LICENSE.d273d63619c9.txt'
Deleting 'admin/fonts/README.ab99e6b541ea.txt'
Deleting 'admin/css/changelists.c70d77c47e69.css'
Deleting 'admin/css/fonts.168bab448fee.css'
Deleting 'admin/css/responsive.css'
Deleting 'admin/css/nav_sidebar.css'
Deleting 'admin/css/login.c35adf41bb6e.css'
Deleting 'admin/css/forms.647cb5f1dee9.css'
Deleting 'admin/css/responsive.b128bdf0edef.css'
Deleting 'admin/css/autocomplete.4a81fc4242d0.css'
Deleting 'admin/css/responsive_rtl.css'
Deleting 'admin/css/forms.css'
Deleting 'admin/css/changelists.css'
Deleting 'admin/css/base.1f418065fc2c.css'
Deleting 'admin/css/rtl.4bc23eb90919.css'
Deleting 'admin/css/widgets.css'
Deleting 'admin/css/login.css'
Deleting 'admin/css/widgets.694d845b2cb1.css'
Deleting 'admin/css/nav_sidebar.0fd434145f4d.css'
Deleting 'admin/css/autocomplete.css'
Deleting 'admin/css/responsive_rtl.e13ae754cceb.css'
Deleting 'admin/css/rtl.css'
Deleting 'admin/css/fonts.css'
Deleting 'admin/css/dashboard.be83f13e4369.css'
Deleting 'admin/css/dashboard.css'
Deleting 'admin/css/base.css'
Deleting 'admin/css/vendor/select2/select2.css'
Deleting 'admin/css/vendor/select2/LICENSE-SELECT2.f94142512c91.md'
Deleting 'admin/css/vendor/select2/select2.min.css'
Deleting 'admin/css/vendor/select2/LICENSE-SELECT2.md'
Deleting 'admin/css/vendor/select2/select2.a2194c262648.css'
Deleting 'admin/css/vendor/select2/select2.min.9f54e6414f87.css'
Deleting 'admin/img/icon-viewlink.41eb31f7826e.svg'
Deleting 'admin/img/inline-delete.svg'
Deleting 'admin/img/icon-viewlink.svg'
Deleting 'admin/img/LICENSE'
Deleting 'admin/img/README.txt'
Deleting 'admin/img/icon-unknown.svg'
Deleting 'admin/img/icon-deletelink.svg'
Deleting 'admin/img/sorting-icons.3a097b59f104.svg'
Deleting 'admin/img/icon-unknown.a18cb4398978.svg'
Deleting 'admin/img/icon-no.439e821418cd.svg'
Deleting 'admin/img/selector-icons.svg'
Deleting 'admin/img/icon-alert.svg'
Deleting 'admin/img/icon-yes.svg'
Deleting 'admin/img/icon-calendar.svg'
Deleting 'admin/img/icon-clock.svg'
Deleting 'admin/img/calendar-icons.39b290681a8b.svg'
Deleting 'admin/img/selector-icons.b4555096cea2.svg'
Deleting 'admin/img/icon-unknown-alt.svg'
Deleting 'admin/img/icon-unknown-alt.81536e128bb6.svg'
Deleting 'admin/img/icon-addlink.d519b3bab011.svg'
Deleting 'admin/img/icon-addlink.svg'
Deleting 'admin/img/tooltag-arrowright.svg'
Deleting 'admin/img/search.7cf54ff789c6.svg'
Deleting 'admin/img/README.a70711a38d87.txt'
Deleting 'admin/img/icon-clock.e1d4dfac3f2b.svg'
Deleting 'admin/img/tooltag-add.svg'
Deleting 'admin/img/calendar-icons.svg'
Deleting 'admin/img/icon-changelink.svg'
Deleting 'admin/img/LICENSE.2c54f4e1ca1c'
Deleting 'admin/img/icon-alert.034cc7d8a67f.svg'
Deleting 'admin/img/icon-yes.d2f9f035226a.svg'
Deleting 'admin/img/icon-deletelink.564ef9dc3854.svg'
Deleting 'admin/img/icon-calendar.ac7aea671bea.svg'
Deleting 'admin/img/tooltag-add.e59d620a9742.svg'
Deleting 'admin/img/search.svg'
Deleting 'admin/img/icon-no.svg'
Deleting 'admin/img/sorting-icons.svg'
Deleting 'admin/img/inline-delete.fec1b761f254.svg'
Deleting 'admin/img/icon-changelink.18d2fd706348.svg'
Deleting 'admin/img/tooltag-arrowright.bbfb788a849e.svg'
Deleting 'admin/img/gis/move_vertex_on.svg'
Deleting 'admin/img/gis/move_vertex_on.0047eba25b67.svg'
Deleting 'admin/img/gis/move_vertex_off.7a23bf31ef8a.svg'
Deleting 'admin/img/gis/move_vertex_off.svg'
Deleting 'admin/js/prepopulate.js'
Deleting 'admin/js/popup_response.js'
Deleting 'admin/js/cancel.js'
Deleting 'admin/js/SelectFilter2.js'
Deleting 'admin/js/autocomplete.js'
Deleting 'admin/js/collapse.f84e7410290f.js'
Deleting 'admin/js/nav_sidebar.7605597ddf52.js'
Deleting 'admin/js/actions.a6d23e8853fd.js'
Deleting 'admin/js/calendar.js'
Deleting 'admin/js/popup_response.c6cc78ea5551.js'
Deleting 'admin/js/urlify.js'
Deleting 'admin/js/prepopulate_init.js'
Deleting 'admin/js/cancel.ecc4c5ca7b32.js'
Deleting 'admin/js/change_form.js'
Deleting 'admin/js/jquery.init.b7781a0897fc.js'
Deleting 'admin/js/jquery.init.js'
Deleting 'admin/js/prepopulate_init.e056047b7a7e.js'
Deleting 'admin/js/core.ccd84108ec57.js'
Deleting 'admin/js/inlines.7596b7fd289e.js'
Deleting 'admin/js/actions.js'
Deleting 'admin/js/nav_sidebar.js'
Deleting 'admin/js/urlify.25cc3eac8123.js'
Deleting 'admin/js/prepopulate.bd2361dfd64d.js'
Deleting 'admin/js/autocomplete.b6b77d0e5906.js'
Deleting 'admin/js/SelectBox.8161741c7647.js'
Deleting 'admin/js/collapse.js'
Deleting 'admin/js/inlines.js'
Deleting 'admin/js/SelectFilter2.d250dcb52a9a.js'
Deleting 'admin/js/change_form.9d8ca4f96b75.js'
Deleting 'admin/js/SelectBox.js'
Deleting 'admin/js/core.js'
Deleting 'admin/js/calendar.f8a5d055eb33.js'
Deleting 'admin/js/admin/RelatedObjectLookups.b4d76b6aaf0b.js'
Deleting 'admin/js/admin/RelatedObjectLookups.js'
Deleting 'admin/js/admin/DateTimeShortcuts.js'
Deleting 'admin/js/admin/DateTimeShortcuts.5548f99471bf.js'
Deleting 'admin/js/vendor/select2/select2.full.min.js'
Deleting 'admin/js/vendor/select2/select2.full.min.fcd7500d8e13.js'
Deleting 'admin/js/vendor/select2/LICENSE.md'
Deleting 'admin/js/vendor/select2/LICENSE.f94142512c91.md'
Deleting 'admin/js/vendor/select2/select2.full.js'
Deleting 'admin/js/vendor/select2/select2.full.c2afdeda3058.js'
Deleting 'admin/js/vendor/select2/i18n/af.4f6fcd73488c.js'
Deleting 'admin/js/vendor/select2/i18n/zh-CN.js'
Deleting 'admin/js/vendor/select2/i18n/bn.6d42b4dd5665.js'
Deleting 'admin/js/vendor/select2/i18n/hr.js'
Deleting 'admin/js/vendor/select2/i18n/ja.170ae885d74f.js'
Deleting 'admin/js/vendor/select2/i18n/hy.js'
Deleting 'admin/js/vendor/select2/i18n/pl.6031b4f16452.js'
Deleting 'admin/js/vendor/select2/i18n/az.js'
Deleting 'admin/js/vendor/select2/i18n/vi.097a5b75b3e1.js'
Deleting 'admin/js/vendor/select2/i18n/ro.js'
Deleting 'admin/js/vendor/select2/i18n/nl.js'
Deleting 'admin/js/vendor/select2/i18n/nb.da2fce143f27.js'
Deleting 'admin/js/vendor/select2/i18n/ps.js'
Deleting 'admin/js/vendor/select2/i18n/cs.js'
Deleting 'admin/js/vendor/select2/i18n/lt.js'
Deleting 'admin/js/vendor/select2/i18n/km.js'
Deleting 'admin/js/vendor/select2/i18n/dsb.56372c92d2f1.js'
Deleting 'admin/js/vendor/select2/i18n/eu.adfe5c97b72c.js'
Deleting 'admin/js/vendor/select2/i18n/ar.65aa8e36bf5d.js'
Deleting 'admin/js/vendor/select2/i18n/fr.js'
Deleting 'admin/js/vendor/select2/i18n/ro.f75cb460ec3b.js'
Deleting 'admin/js/vendor/select2/i18n/fa.js'
Deleting 'admin/js/vendor/select2/i18n/ps.38dfa47af9e0.js'
Deleting 'admin/js/vendor/select2/i18n/sv.7a9c2f71e777.js'
Deleting 'admin/js/vendor/select2/i18n/gl.js'
Deleting 'admin/js/vendor/select2/i18n/sl.131a78bc0752.js'
Deleting 'admin/js/vendor/select2/i18n/sk.js'
Deleting 'admin/js/vendor/select2/i18n/ja.js'
Deleting 'admin/js/vendor/select2/i18n/sq.js'
Deleting 'admin/js/vendor/select2/i18n/fa.3b5bd1961cfd.js'
Deleting 'admin/js/vendor/select2/i18n/ko.e7be6c20e673.js'
Deleting 'admin/js/vendor/select2/i18n/nb.js'
Deleting 'admin/js/vendor/select2/i18n/cs.4f43e8e7d33a.js'
Deleting 'admin/js/vendor/select2/i18n/ca.js'
Deleting 'admin/js/vendor/select2/i18n/zh-CN.2cff662ec5f9.js'
Deleting 'admin/js/vendor/select2/i18n/tr.b5a0643d1545.js'
Deleting 'admin/js/vendor/select2/i18n/hsb.js'
Deleting 'admin/js/vendor/select2/i18n/tk.js'
Deleting 'admin/js/vendor/select2/i18n/pt-BR.js'
Deleting 'admin/js/vendor/select2/i18n/es.js'
Deleting 'admin/js/vendor/select2/i18n/th.f38c20b0221b.js'
Deleting 'admin/js/vendor/select2/i18n/ka.js'
Deleting 'admin/js/vendor/select2/i18n/hsb.fa3b55265efe.js'
Deleting 'admin/js/vendor/select2/i18n/is.js'
Deleting 'admin/js/vendor/select2/i18n/pt-BR.e1b294433e7f.js'
Deleting 'admin/js/vendor/select2/i18n/ms.js'
Deleting 'admin/js/vendor/select2/i18n/hu.6ec6039cb8a3.js'
Deleting 'admin/js/vendor/select2/i18n/fi.js'
Deleting 'admin/js/vendor/select2/i18n/et.2b96fd98289d.js'
Deleting 'admin/js/vendor/select2/i18n/he.js'
Deleting 'admin/js/vendor/select2/i18n/lt.23c7ce903300.js'
Deleting 'admin/js/vendor/select2/i18n/vi.js'
Deleting 'admin/js/vendor/select2/i18n/gl.d99b1fedaa86.js'
Deleting 'admin/js/vendor/select2/i18n/de.js'
Deleting 'admin/js/vendor/select2/i18n/ms.4ba82c9a51ce.js'
Deleting 'admin/js/vendor/select2/i18n/km.c23089cb06ca.js'
Deleting 'admin/js/vendor/select2/i18n/zh-TW.04554a227c2b.js'
Deleting 'admin/js/vendor/select2/i18n/pl.js'
Deleting 'admin/js/vendor/select2/i18n/hi.js'
Deleting 'admin/js/vendor/select2/i18n/lv.08e62128eac1.js'
Deleting 'admin/js/vendor/select2/i18n/sv.js'
Deleting 'admin/js/vendor/select2/i18n/it.be4fe8d365b5.js'
Deleting 'admin/js/vendor/select2/i18n/tr.js'
Deleting 'admin/js/vendor/select2/i18n/es.66dbc2652fb1.js'
Deleting 'admin/js/vendor/select2/i18n/uk.js'
Deleting 'admin/js/vendor/select2/i18n/sr.js'
Deleting 'admin/js/vendor/select2/i18n/ru.934aa95f5b5f.js'
Deleting 'admin/js/vendor/select2/i18n/el.27097f071856.js'
Deleting 'admin/js/vendor/select2/i18n/bg.39b8be30d4f0.js'
Deleting 'admin/js/vendor/select2/i18n/ar.js'
Deleting 'admin/js/vendor/select2/i18n/ru.js'
Deleting 'admin/js/vendor/select2/i18n/pt.js'
Deleting 'admin/js/vendor/select2/i18n/bg.js'
Deleting 'admin/js/vendor/select2/i18n/sr-Cyrl.js'
Deleting 'admin/js/vendor/select2/i18n/zh-TW.js'
Deleting 'admin/js/vendor/select2/i18n/en.cf932ba09a98.js'
Deleting 'admin/js/vendor/select2/i18n/el.js'
Deleting 'admin/js/vendor/select2/i18n/id.js'
Deleting 'admin/js/vendor/select2/i18n/sq.5636b60d29c9.js'
Deleting 'admin/js/vendor/select2/i18n/pt.33b4a3b44d43.js'
Deleting 'admin/js/vendor/select2/i18n/fr.05e0542fcfe6.js'
Deleting 'admin/js/vendor/select2/i18n/ne.3d79fd3f08db.js'
Deleting 'admin/js/vendor/select2/i18n/bn.js'
Deleting 'admin/js/vendor/select2/i18n/af.js'
Deleting 'admin/js/vendor/select2/i18n/hi.70640d41628f.js'
Deleting 'admin/js/vendor/select2/i18n/he.e420ff6cd3ed.js'
Deleting 'admin/js/vendor/select2/i18n/tk.7c572a68c78f.js'
Deleting 'admin/js/vendor/select2/i18n/it.js'
Deleting 'admin/js/vendor/select2/i18n/bs.js'
Deleting 'admin/js/vendor/select2/i18n/ko.js'
Deleting 'admin/js/vendor/select2/i18n/uk.8cede7f4803c.js'
Deleting 'admin/js/vendor/select2/i18n/nl.997868a37ed8.js'
Deleting 'admin/js/vendor/select2/i18n/eu.js'
Deleting 'admin/js/vendor/select2/i18n/da.766346afe4dd.js'
Deleting 'admin/js/vendor/select2/i18n/dsb.js'
Deleting 'admin/js/vendor/select2/i18n/mk.dabbb9087130.js'
Deleting 'admin/js/vendor/select2/i18n/ca.a166b745933a.js'
Deleting 'admin/js/vendor/select2/i18n/sr.5ed85a48f483.js'
Deleting 'admin/js/vendor/select2/i18n/en.js'
Deleting 'admin/js/vendor/select2/i18n/lv.js'
Deleting 'admin/js/vendor/select2/i18n/et.js'
Deleting 'admin/js/vendor/select2/i18n/da.js'
Deleting 'admin/js/vendor/select2/i18n/id.04debded514d.js'
Deleting 'admin/js/vendor/select2/i18n/ka.2083264a54f0.js'
Deleting 'admin/js/vendor/select2/i18n/hu.js'
Deleting 'admin/js/vendor/select2/i18n/sk.33d02cef8d11.js'
Deleting 'admin/js/vendor/select2/i18n/bs.91624382358e.js'
Deleting 'admin/js/vendor/select2/i18n/sr-Cyrl.f254bb8c4c7c.js'
Deleting 'admin/js/vendor/select2/i18n/fi.614ec42aa9ba.js'
Deleting 'admin/js/vendor/select2/i18n/is.3ddd9a6a97e9.js'
Deleting 'admin/js/vendor/select2/i18n/th.js'
Deleting 'admin/js/vendor/select2/i18n/sl.js'
Deleting 'admin/js/vendor/select2/i18n/ne.js'
Deleting 'admin/js/vendor/select2/i18n/az.270c257daf81.js'
Deleting 'admin/js/vendor/select2/i18n/hr.a2b092cc1147.js'
Deleting 'admin/js/vendor/select2/i18n/hy.c7babaeef5a6.js'
Deleting 'admin/js/vendor/select2/i18n/de.8a1c222b0204.js'
Deleting 'admin/js/vendor/select2/i18n/mk.js'
Deleting 'admin/js/vendor/xregexp/LICENSE.bf79e414957a.txt'
Deleting 'admin/js/vendor/xregexp/LICENSE.txt'
Deleting 'admin/js/vendor/xregexp/xregexp.min.js'
Deleting 'admin/js/vendor/xregexp/xregexp.efda034b9537.js'
Deleting 'admin/js/vendor/xregexp/xregexp.js'
Deleting 'admin/js/vendor/xregexp/xregexp.min.b0439563a5d3.js'
Deleting 'admin/js/vendor/jquery/jquery.min.js'
Deleting 'admin/js/vendor/jquery/LICENSE.txt'
Deleting 'admin/js/vendor/jquery/jquery.min.dc5e7f18c8d3.js'
Deleting 'admin/js/vendor/jquery/jquery.23c7c5d2d131.js'
Deleting 'admin/js/vendor/jquery/LICENSE.75308107741f.txt'
Deleting 'admin/js/vendor/jquery/jquery.js'
Deleting 'css/dark.css'
Deleting 'css/dark.934e9cd71cc4.css'
Deleting 'css/style.css'
Deleting 'css/light.css'
Deleting 'font/Sen-Bold.ttf.woff'
Deleting 'font/Sen-Regular.ttf.woff'
Deleting 'img/icon-close.svg'
Deleting 'img/banner-tube-archivist-dark.png'
Deleting 'img/banner-tube-archivist-light.png'
Deleting 'img/icon-seen.svg'
Deleting 'img/icon-play.svg'
Deleting 'img/icon-stop.svg'
Deleting 'img/icon-help.svg'
Deleting 'img/icon-gear.svg'
Deleting 'img/icon-gridview.svg'
Deleting 'img/icon-rescan.svg'
Deleting 'img/icon-unseen.svg'
Deleting 'img/icon-download.svg'
Deleting 'img/icon-add.svg'
Deleting 'img/icon-search.svg'
Deleting 'img/icon-listview.svg'
Deleting 'img/icon-thumb.svg'

152 static files copied to '/app/staticfiles', 476 post-processed.
[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.19.1 (64bit) on [Tue Oct  5 21:43:41 2021] ***
compiled with version: 10.2.1 20210110 on 05 October 2021 21:13:25
os: Linux-5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021
nodename: 283f1e8b559e
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /app
writing pidfile to /tmp/project-master.pid
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :8080 fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.9.7 (default, Sep 28 2021, 18:41:28)  [GCC 10.2.1 20210110]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x56295e4ccff0
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145808 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
ta_channel index is created and up to date...
ta_video index is created and up to date...
ta_download index is created and up to date...
sync redis
WSGI app 0 (mountpoint='') ready in 5 seconds on interpreter 0x56295e4ccff0 pid: 15 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 15)
spawned uWSGI worker 1 (pid: 21, cores: 1)
/usr/local/lib/python3.9/site-packages/celery/platforms.py:834: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!

Please specify a different user using the --uid option.

User information: uid=0 euid=0 gid=0 egid=0

  warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(

 -------------- celery@283f1e8b559e v5.1.2 (sun-harmonics)
--- ***** -----
-- ******* ---- Linux-5.4.0-88-generic-x86_64-with-glibc2.31 2021-10-05 16:43:47
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x7f302cdad910
- ** ---------- .> transport:   redis://archivist-redis:6379//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery


[tasks]
  . home.tasks.check_reindex
  . home.tasks.download_pending
  . home.tasks.download_single
  . home.tasks.extrac_dl
  . home.tasks.run_backup
  . home.tasks.run_manual_import
  . home.tasks.run_restore_backup
  . home.tasks.update_subscribed

[2021-10-05 16:43:48,398: INFO/MainProcess] Connected to redis://archivist-redis:6379//
[2021-10-05 16:43:48,479: INFO/MainProcess] mingle: searching for neighbors
[2021-10-05 16:43:49,559: INFO/MainProcess] mingle: all alone
[2021-10-05 16:43:49,634: INFO/MainProcess] celery@283f1e8b559e ready.
[pid: 21|app: 0|req: 1/1] 192.168.0.150 () {42 vars in 753 bytes} [Tue Oct  5 21:44:43 2021] GET / => generated 4192 bytes in 274 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 2/2] 192.168.0.150 () {42 vars in 764 bytes} [Tue Oct  5 21:44:43 2021] GET /static/css/dark.934e9cd71cc4.css => generated 237 bytes in 4 msecs via sendfile() (HTTP/1.1 200) 10 headers in 344 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 3/3] 192.168.0.150 () {42 vars in 766 bytes} [Tue Oct  5 21:44:43 2021] GET /static/css/style.7ba40e75dac4.css => generated 2663 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 10 headers in 346 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 4/4] 192.168.0.150 () {42 vars in 743 bytes} [Tue Oct  5 21:44:43 2021] GET /static/script.1de5bb2ef080.js => generated 2413 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 10 headers in 353 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 5/5] 192.168.0.150 () {42 vars in 854 bytes} [Tue Oct  5 21:44:43 2021] GET /static/img/banner-tube-archivist-dark.edd9d4b3a1e5.png => generated 52255 bytes in 4 msecs via sendfile() (HTTP/1.1 200) 8 headers in 284 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 6/6] 192.168.0.150 () {42 vars in 820 bytes} [Tue Oct  5 21:44:43 2021] GET /static/img/icon-help.1d655f2f80a1.svg => generated 1756 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 7/7] 192.168.0.150 () {42 vars in 820 bytes} [Tue Oct  5 21:44:43 2021] GET /static/img/icon-gear.3b8c10795a9c.svg => generated 1512 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 8/8] 192.168.0.150 () {42 vars in 824 bytes} [Tue Oct  5 21:44:43 2021] GET /static/img/icon-search.fd22f6656b26.svg => generated 1320 bytes in 3 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 9/9] 192.168.0.150 () {42 vars in 828 bytes} [Tue Oct  5 21:44:43 2021] GET /static/img/icon-gridview.0c117ea41097.svg => generated 1212 bytes in 3 msecs via sendfile() (HTTP/1.1 200) 10 headers in 334 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 10/10] 192.168.0.150 () {42 vars in 828 bytes} [Tue Oct  5 21:44:43 2021] GET /static/img/icon-listview.48ef792820a6.svg => generated 1290 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 11/11] 192.168.0.150 () {44 vars in 847 bytes} [Tue Oct  5 21:44:43 2021] GET /static/font/Sen-Regular.ttf.218bb0c177f2.woff => generated 28956 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 8 headers in 296 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 12/12] 192.168.0.150 () {44 vars in 841 bytes} [Tue Oct  5 21:44:43 2021] GET /static/font/Sen-Bold.ttf.b032f49273fe.woff => generated 24936 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 8 headers in 296 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 13/13] 192.168.0.150 () {46 vars in 861 bytes} [Tue Oct  5 21:44:43 2021] GET /static/favicon.b21cb294ff64.ico => generated 7661 bytes in 1 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 14/14] 192.168.0.150 () {44 vars in 868 bytes} [Tue Oct  5 21:45:01 2021] GET /channel/ => generated 4547 bytes in 110 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 15/15] 192.168.0.150 () {42 vars in 826 bytes} [Tue Oct  5 21:45:01 2021] GET /static/img/icon-add.19182e5c4ca3.svg => generated 1299 bytes in 4 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 16/16] 192.168.0.150 () {44 vars in 860 bytes} [Tue Oct  5 21:45:02 2021] GET / => generated 4192 bytes in 57 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 17/17] 192.168.0.150 () {44 vars in 868 bytes} [Tue Oct  5 21:45:06 2021] GET /channel/ => generated 4547 bytes in 62 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 18/18] 192.168.0.150 () {44 vars in 880 bytes} [Tue Oct  5 21:45:07 2021] GET /downloads/ => generated 4218 bytes in 77 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 19/19] 192.168.0.150 () {42 vars in 834 bytes} [Tue Oct  5 21:45:07 2021] GET /static/img/icon-rescan.4d903b41a4f8.svg => generated 1556 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 20/20] 192.168.0.150 () {42 vars in 838 bytes} [Tue Oct  5 21:45:07 2021] GET /static/img/icon-download.ab2a56dc6336.svg => generated 1226 bytes in 3 msecs via sendfile() (HTTP/1.1 200) 10 headers in 333 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 21/21] 192.168.0.150 () {42 vars in 757 bytes} [Tue Oct  5 21:45:07 2021] GET /static/progress.8f315ada545f.js => generated 945 bytes in 2 msecs via sendfile() (HTTP/1.1 200) 10 headers in 351 bytes (0 switches on core 0)
[pid: 21|app: 0|req: 22/22] 192.168.0.150 () {42 vars in 731 bytes} [Tue Oct  5 21:45:08 2021] GET /downloads/progress => generated 17 bytes in 23 msecs (HTTP/1.1 200) 5 headers in 157 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 23/23] 192.168.0.150 () {44 vars in 878 bytes} [Tue Oct  5 21:45:14 2021] GET /channel/ => generated 4547 bytes in 50 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 24/24] 192.168.0.150 () {44 vars in 860 bytes} [Tue Oct  5 21:45:17 2021] GET / => generated 4192 bytes in 72 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 25/25] 192.168.0.150 () {44 vars in 868 bytes} [Tue Oct  5 21:45:18 2021] GET /channel/ => generated 4547 bytes in 45 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 26/26] 192.168.0.150 () {44 vars in 880 bytes} [Tue Oct  5 21:45:19 2021] GET /downloads/ => generated 4218 bytes in 66 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 27/27] 192.168.0.150 () {42 vars in 731 bytes} [Tue Oct  5 21:45:20 2021] GET /downloads/progress => generated 17 bytes in 12 msecs (HTTP/1.1 200) 5 headers in 157 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 28/28] 192.168.0.150 () {44 vars in 878 bytes} [Tue Oct  5 21:45:22 2021] GET /channel/ => generated 4547 bytes in 43 msecs (HTTP/1.1 200) 7 headers in 349 bytes (1 switches on core 0)

Cannot access via Firefox

I'm currently running TubeArchivist through docker on Unraid and have set up the redis and elasticsearch containers as instructed.

On Chrome I can access the web UI without issue; however, when I try to access it through Firefox I see the following in the log:

[2021-10-18 12:26:19,440: INFO/MainProcess] celery@b91bb94c1cab ready.

invalid request block size: 6559 (max 4096)...skip

invalid request block size: 6559 (max 4096)...skip

Whereas with Chrome it accepts the header and moves along. Any idea what can be done for this? I'd really rather avoid using Chrome if possible.
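The "invalid request block size" message is uWSGI rejecting a request whose headers exceed its default 4096-byte buffer; Firefox can send larger cookies/headers than Chrome for the same site. The usual remedy, assuming you can override the container's uwsgi.ini, is to raise buffer-size:

# uwsgi.ini excerpt (sketch): raise the request buffer above the 4 KB default
buffer-size = 8192

After changing it, restart the container so uWSGI re-reads the file.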

Trouble with 4k60 videos

I tried running some 4K 60 fps videos through Tube Archivist to see if they would download. They went through, but the sound was missing.

Since it uses yt-dlp to download the videos, I tried running the same videos through yt-dlp directly to see if that was the cause, but they came through fine. Maybe it has something to do with the file type? YouTube downloads above 1080p come out as .mkv, but Tube Archivist converted them to .mp4 (see the diagnostic sketch after the links).

Here are the videos I've used:

https://www.youtube.com/watch?v=LXb3EKWsInQ
https://www.youtube.com/watch?v=mkggXE5e2yk
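One possibility (an assumption, not confirmed) is that streams above 1080p arrive as VP9 video plus Opus audio, and remuxing them into .mp4 produces a file whose audio track some players cannot decode, which would present as missing sound. A quick way to check outside of Tube Archivist:

# Download the same video keeping the original codecs in an mkv container (sketch).
yt-dlp -f "bestvideo+bestaudio" --merge-output-format mkv "https://www.youtube.com/watch?v=LXb3EKWsInQ"
# Then inspect which audio codec ended up in the file (requires ffprobe).
ffprobe -v error -select_streams a -show_entries stream=codec_name -of csv=p=0 <output-file>

If the audio stream is opus, the silent .mp4 likely comes from the container conversion rather than from the download itself.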

Question about populating the archive manually

Looks like a great project. I was trying to use Plex Media Server to manage and share my music video collection, but Plex is not very good with music videos.

However, before I install and use it, I do have a few questions about TA (short for Tube Archivist):

  1. I have 15-20 YT playlists, each with tens to hundreds of videos. Is it possible to populate TA directly from a playlist?
  2. I have an existing library on my hard disk - a collection of thousands of MP4 videos already downloaded from YT. The docs say that for each video I need to create a JSON file matching the video file. What are the contents of that JSON, or is it empty?
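For question 2, yt-dlp can write a matching metadata JSON next to each download via --write-info-json, which may be the format the docs refer to (an assumption worth verifying against the import docs). For videos already on disk, metadata can be backfilled per video ID without re-downloading:

# Sketch: fetch metadata only for a video that is already downloaded (video ID is an example).
yt-dlp --skip-download --write-info-json "https://www.youtube.com/watch?v=2XnKoD1zCJU"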

Thanks and keep up the good work!

Elasticsearch and Tube Archivist keep restarting

Hello,

I've been trying for a few days to get the Docker stack running, but Elasticsearch keeps restarting, followed by Tube Archivist.

Is there some way to keep ES running so I can use Tube Archivist?

The server has enough memory for all the images to run smoothly.
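Worth ruling out via the ES container's docker logs first: a common cause of Elasticsearch restart loops in Docker is the kernel's vm.max_map_count limit being too low, or wrong ownership of the mounted data directory. A sketch of the usual host-side fixes (the data path is an assumption; substitute your own):

# Raise the mmap limit Elasticsearch requires (add to /etc/sysctl.conf to persist across reboots).
sudo sysctl -w vm.max_map_count=262144
# The official ES image runs as uid 1000; give it ownership of the mounted data directory.
sudo chown -R 1000:0 /path/to/es-data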

Video date and ID

Is there a way to add the video date and ID to the end of the file name rather than the front? It makes manual searching more difficult when I see 20210919_2XnKoD1zCJU_ before the actual title of the video.
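For context, a yt-dlp-style output template can express exactly that ordering; a sketch of what such a configurable name could look like (illustrative only, not a current TA setting):

# Title first, date and ID at the end (yt-dlp output template syntax).
yt-dlp -o "%(title)s - %(upload_date)s_%(id)s.%(ext)s" "https://www.youtube.com/watch?v=2XnKoD1zCJU"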

Help setting this up

Hey! I'm trying to set this up on a QNAP NAS. It's running QTS's Docker Station (I'm not sure if any of this will be important information). Anyway, I pulled the docker-compose file and just changed the locations for the files it'll pull. When I navigate to the URL for the server, the browser just displays this JSON:

{
  "name" : "305f54c99168",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "B270t1WITSWJ1m1ooeqA1Q",
  "version" : {
    "number" : "7.14.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
    "build_date" : "2021-08-26T09:01:05.390870785Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

I also got a ton of messages in the console terminal, and I'm not sure what any of them mean. I think I might actually have had more of this; the terminal might've cleared some of it away while I was checking on something else. It's attached here --> https://pastebin.com/kggPyasL Some of these are about my currently high disk usage - funnily enough, a lot of that is YouTube videos, and I'd love to move them into Tube Archivist as soon as I can get it working.
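The JSON shown is Elasticsearch's root endpoint, which suggests the URL being browsed points at the ES port rather than Tube Archivist's web port (an assumption; 8000 is the compose default, substitute your own mapping). A quick check from a shell:

curl -s http://<NAS-IP>:8000/   # should return the Tube Archivist web UI HTML
curl -s http://<NAS-IP>:9200/   # returns the Elasticsearch JSON shown above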

[Feature Request] Basic Auth for elasticsearch connection

Hi!

When you set up Elasticsearch centrally, it is recommended to secure it with basic auth.

I propose adding configuration (environment variables) for a basic auth username and password used when connecting to an ES instance.
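A sketch of what the proposal could look like in docker-compose (the auth variable names are hypothetical, not an existing feature):

environment:
  - ES_URL=https://es.example.com:9200
  - ES_BASIC_AUTH_USER=tubearchivist   # hypothetical
  - ES_BASIC_AUTH_PASSWORD=changeme    # hypothetical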

Have a nice day!

KeyError: 'default_view'

Just updated from an old version (I actually don't know which) to 0.0.5. I'm getting a KeyError on line 112 in views.py.

tubearchivist      | 2021-10-03T16:35:27.550508323Z Internal Server Error: /
tubearchivist      | 2021-10-03T16:35:27.550553105Z Traceback (most recent call last):
tubearchivist      | 2021-10-03T16:35:27.550557585Z   File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
tubearchivist      | 2021-10-03T16:35:27.550560572Z     response = get_response(request)
tubearchivist      | 2021-10-03T16:35:27.550563325Z   File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
tubearchivist      | 2021-10-03T16:35:27.550565996Z     response = wrapped_callback(request, *callback_args, **callback_kwargs)
tubearchivist      | 2021-10-03T16:35:27.550568542Z   File "/usr/local/lib/python3.9/site-packages/django/views/generic/base.py", line 70, in view
tubearchivist      | 2021-10-03T16:35:27.550571161Z     return self.dispatch(request, *args, **kwargs)
tubearchivist      | 2021-10-03T16:35:27.550573643Z   File "/usr/local/lib/python3.9/site-packages/django/views/generic/base.py", line 98, in dispatch
tubearchivist      | 2021-10-03T16:35:27.550576174Z     return handler(request, *args, **kwargs)
tubearchivist      | 2021-10-03T16:35:27.550578713Z   File "/app/./home/views.py", line 42, in get
tubearchivist      | 2021-10-03T16:35:27.550581349Z     colors, view_style, sort_order, hide_watched = self.read_config()
tubearchivist      | 2021-10-03T16:35:27.550583820Z   File "/app/./home/views.py", line 112, in read_config
tubearchivist      | 2021-10-03T16:35:27.550586455Z     view_style = config_handler["default_view"]["home"]
tubearchivist      | 2021-10-03T16:35:27.550589565Z KeyError: 'default_view'
tubearchivist      | 2021-10-03T16:35:27.550672734Z [pid: 31|app: 0|req: 34/34] 172.21.0.1 () {58 vars in 935 bytes} [Sun Oct  3 16:35:27 2021] GET / => generated 69214 bytes in 29 msecs (HTTP/1.1 500) 6 headers in 186 bytes (1 switches on core 0)
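The traceback suggests the persisted config loaded from Redis predates the default_view key introduced around 0.0.5 (an assumption), so the direct lookup raises. A defensive sketch of the failing read:

# Sketch: fall back when an older stored config lacks the new key (the "grid" default is assumed).
config_handler = {}  # stands in for the config dict loaded from Redis
view_style = config_handler.get("default_view", {}).get("home", "grid")

Re-saving the settings once in the UI may also rewrite the stored config with the new keys.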

Exposing it to the internet?

Thanks for this amazing script!

I've edited the docker-compose file to change the path of the directories, and after running docker-compose up -d, all the images are built and started, but I'm not able to access it over the internet on port 8000.

My server has a static IP and all ports open; maybe in the compose file I need to bind 0.0.0.0:8000 to be able to access it?
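For what it's worth, docker-compose publishes ports on all host interfaces by default, so the bind is usually not the problem; a sketch of making it explicit anyway (the host firewall and any provider security rules are worth checking too):

# docker-compose.yml excerpt (sketch): publish the web port on all interfaces explicitly.
ports:
  - "0.0.0.0:8000:8000"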

[Feature Request] Ability for custom folder and file names

The ability to change the format of the downloaded file and folder names.
I share the same files with Plex, and the file naming needs to include some specific data for the metadata agent to work. I am able to modify the file and folder names afterwards, and it will work with a rescan, but then new downloads break.
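As an illustration of such a scheme, yt-dlp's output template syntax can encode a Plex-friendly layout (a sketch, not an existing TA option):

# Channel folder plus "Title [id]" file names, a common pattern for metadata agents.
yt-dlp -o "%(channel)s/%(title)s [%(id)s].%(ext)s" <URL>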
