
streamdl's People

Contributors

biodrone, dependabot-preview[bot], dependabot[bot], snyk-bot


streamdl's Issues

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Creating virtualenv streamdl-a9Mgguvh-py3.9 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

  PackageNotFound

  Package streamlink (1.7.0) not found.

  at /usr/local/.pyenv/versions/3.9.0/lib/python3.9/site-packages/poetry/repositories/pool.py:144 in package
      140│                     self._packages.append(package)
      141│ 
      142│                     return package
      143│ 
    → 144│         raise PackageNotFound("Package {} ({}) not found.".format(name, version))
      145│ 
      146│     def find_packages(
      147│         self, dependency,
      148│     ):

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

[FIX] - Proper YAML Structure

The current config isn't valid YAML, so it doesn't parse properly. It should be structured more like:

twitch:
  - channel
  - other_channel
youtube:
  - place

[FEATURE] Modify Folder Structure and Allow User Configurable 'Move When Finished' Dir

I was trying this out today, hoping to switch over from another project I've been using: https://github.com/jrudess/streamdvr

The two have different structures. streamdvr seems a little heavier; it can lag when you have 20 streamers monitored like I do, mainly when starting and stopping the main process. That's one of the main reasons I'm hoping to move away from single-threaded Node to multi-process Python.

I noticed that when I Ctrl-C out of streamdl (KeyboardInterrupt), the app doesn't gracefully close or finish post-processing the .part files. Could this be made possible?

StreamDVR records streams to streamdvr/capturing; then, when you gracefully exit the process or the recording finishes, it takes the .ts file, remuxes it (fixing the AAC stream), and moves it to the streamdvr/captured folder: https://github.com/jrudess/streamdvr/blob/master/scripts/postprocess_ffmpeg.sh

There is a downside to that for large streams, though: the remux can take a while, even an hour, on a slower disk.

Would it be possible to support a similar folder structure to streamdvr's?
It sorts files like this: streamdvr/captured/TWITCH/username/username_twitch_20200515_211526.mp4
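A minimal sketch of the kind of graceful-shutdown handling this asks for, assuming recordings land as .part files in a capture directory. The function names and the bare rename are illustrative, not streamdl's actual internals; a real implementation would remux with ffmpeg rather than just renaming:

```python
import signal
import sys
from pathlib import Path

def finalize_partials(capture_dir: str) -> list:
    """Rename any leftover *.part files so an exit doesn't orphan them.

    Illustrative stand-in for real post-processing (e.g. an ffmpeg remux).
    """
    finished = []
    for part in Path(capture_dir).glob("*.part"):
        final = part.with_suffix("")  # strip the trailing .part
        part.rename(final)
        finished.append(final)
    return finished

def main(capture_dir: str):
    def handle_sigterm(signum, frame):
        # Translate SIGTERM (e.g. `docker stop`) into KeyboardInterrupt
        # so both exit paths share one cleanup route.
        raise KeyboardInterrupt

    signal.signal(signal.SIGTERM, handle_sigterm)
    try:
        pass  # the monitoring / download loop would run here
    except KeyboardInterrupt:
        finalize_partials(capture_dir)
        sys.exit(0)
```

The key point is that cleanup lives in one place and runs for both Ctrl-C and container shutdown.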

[FEATURE] - Webhook support for Twitch

Would it be possible to support Twitch's webhook system for detecting when streams go live? I haven't been able to find a script that uses it; every monitoring script I've seen polls on a cron-style schedule.

Webhooks could solve two problems at once: the API check timing out when there are too many users to poll in one interval, and the beginning of recordings being missed because the interval has to be slowed down to avoid those timeouts.

This is the info I've been able to find on it
https://dev.twitch.tv/docs/api/reference/#get-streams
https://discuss.dev.twitch.tv/t/webhook-twitch-start/21010
https://dev.twitch.tv/docs/api/webhooks-guide
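For reference, the webhook API linked above (since retired by Twitch in favour of EventSub) used a WebSub-style handshake: on subscribing, Twitch sends a verification GET and the subscriber must echo the hub.challenge query parameter back verbatim. A sketch of just that verification step, with the URL-parsing helper being an assumption about how one might wire it into a handler:

```python
from urllib.parse import urlparse, parse_qs

def webhook_challenge_response(request_url: str):
    """Return the body to send back for a subscription-verification GET.

    Twitch's legacy webhook handshake passes hub.challenge as a query
    parameter; echoing it confirms the subscription. Returns None when
    the request isn't a verification request (e.g. a notification POST).
    """
    params = parse_qs(urlparse(request_url).query)
    challenge = params.get("hub.challenge")
    return challenge[0] if challenge else None
```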

[FEATURE] - Twitch - Specify Preferred Download Quality

Add a config option (maybe something in the YAML?) to specify a preferred quality: ideally best/worst, but perhaps also concrete values like 720p/1080p/etc.

This should check whether the requested quality is available and fail properly if it isn't. We should probably advise people to just use best anyway, but still.
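A sketch of the availability check described above. The dict mimics the quality-name-to-stream mapping that streamlink's streams() call returns; the error-message format is an assumption, not streamdl's actual behaviour:

```python
def pick_quality(available: dict, preferred: str):
    """Pick a stream quality, failing loudly when the preferred one is absent.

    `available` maps quality names (e.g. "best", "720p") to stream objects.
    Raising instead of silently falling back makes misconfiguration obvious.
    """
    if preferred in available:
        return available[preferred]
    raise ValueError(
        "Quality {!r} not available; choices are: {}".format(
            preferred, ", ".join(sorted(available))))
```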

Crashes After Reaching Maximum Python Recursion Depth

The app crashes after reaching Python's maximum recursion depth, because it essentially recurses once per check interval the user sets. Need to find out whether there's a way to break out of the recursion or restructure this entirely.
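One way to restructure it, sketched here as an assumption about the fix rather than streamdl's actual code: replace the recursive re-check with a plain loop, which reuses the same stack frame on every iteration so the recursion limit can never be hit:

```python
import time

def monitor(check_and_download, interval_seconds: float, stop=lambda: False):
    """Re-check on a timer with a loop instead of recursive calls.

    `check_and_download` is the per-interval work; `stop` is a hook so
    callers (and tests) can end the loop. Returns the number of checks run.
    """
    checks = 0
    while not stop():
        check_and_download()
        checks += 1
        time.sleep(interval_seconds)
    return checks
```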

[BUG] - Recursion Error is Back

Describe the bug
Python hits a recursion limit and crashes, leaving the container up but not doing anything.

Expected behavior
For this not to happen. Or for the container/program to automatically restart.

Log Output

Process pistol:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "streamdl.py", line 221, in download_video
    ydl.download(["https://{}/{}/".format(url, user)])
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 2018, in download
    url, force_generic_extractor=self.params.get('force_generic_extractor', False))
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 796, in extract_info
    ie_result = ie.extract(url)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 530, in extract
    ie_result = self._real_extract(url)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/twitch.py", line 580, in _real_extract
    'Downloading stream JSON').get('stream')
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/twitch.py", line 59, in _call_api
    *args, **compat_kwargs(kwargs))
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 892, in _download_json
    expected_status=expected_status)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 870, in _download_json_handle
    expected_status=expected_status)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 660, in _download_webpage_handle
    urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 627, in _request_webpage
    return self._downloader.urlopen(url_or_request)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 2237, in urlopen
    return self._opener.open(req, timeout=self._socket_timeout)
  File "/usr/local/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/usr/local/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/usr/local/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/utils.py", line 2724, in https_open
    req, **kwargs)
  File "/usr/local/lib/python3.7/urllib/request.py", line 1322, in do_open
    r = h.getresponse()
  File "/usr/local/lib/python3.7/http/client.py", line 1344, in getresponse
    response.begin()
  File "/usr/local/lib/python3.7/http/client.py", line 330, in begin
    self.headers = self.msg = parse_headers(self.fp)
  File "/usr/local/lib/python3.7/http/client.py", line 224, in parse_headers
    return email.parser.Parser(_class=_class).parsestr(hstring)
  File "/usr/local/lib/python3.7/email/parser.py", line 67, in parsestr
    return self.parse(StringIO(text), headersonly=headersonly)
  File "/usr/local/lib/python3.7/email/parser.py", line 56, in parse
    feedparser.feed(data)
  File "/usr/local/lib/python3.7/email/feedparser.py", line 176, in feed
    self._call_parse()
  File "/usr/local/lib/python3.7/email/feedparser.py", line 180, in _call_parse
    self._parse()
  File "/usr/local/lib/python3.7/email/feedparser.py", line 256, in _parsegen
    if self._cur.get_content_type() == 'message/delivery-status':
  File "/usr/local/lib/python3.7/email/message.py", line 578, in get_content_type
    value = self.get('content-type', missing)
  File "/usr/local/lib/python3.7/email/message.py", line 471, in get
    return self.policy.header_fetch_parse(k, v)
  File "/usr/local/lib/python3.7/email/_policybase.py", line 316, in header_fetch_parse
    return self._sanitize_header(name, value)
  File "/usr/local/lib/python3.7/email/_policybase.py", line 287, in _sanitize_header
    if _has_surrogates(value):
  File "/usr/local/lib/python3.7/email/utils.py", line 57, in _has_surrogates
    s.encode()
RecursionError: maximum recursion depth exceeded while calling a Python object

Then every subsequent run logs this each time ytdl is launched:
2020-04-02 00:24:08,300 WARNING: Unexpected Error: <class 'RecursionError'>

Platform (please complete the following information):

  • Most recent docker container

Config (please post your config.yml file):

twitch.tv
- pistol

Dependabot can't resolve your Python dependency files

Dependabot can't resolve your Python dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

Creating virtualenv streamdl-_Hubs6jp-py3.9 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

  PackageNotFound

  Package stevedore (3.2.0) not found.

  at /usr/local/.pyenv/versions/3.9.0/lib/python3.9/site-packages/poetry/repositories/pool.py:144 in package
      140│                     self._packages.append(package)
      141│ 
      142│                     return package
      143│ 
    → 144│         raise PackageNotFound("Package {} ({}) not found.".format(name, version))
      145│ 
      146│     def find_packages(
      147│         self, dependency,
      148│     ):

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

View the update logs.

[FEATURE] Add a Currently Downloading Log Message

Is your feature request related to a problem? Please describe.
I'd like to see which downloads are active when I look at the logs.

Describe the solution you'd like
When viewing the logfile, or the docker logs, I'd like a log message for each user that's currently downloading so I can know whether it's safe to rebuild the container without losing an in-progress download.

Describe alternatives you've considered
Reading the DEBUG output for processes provides this info, but if the log level is set to something less chatty like INFO, it isn't visible.
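A sketch of the requested INFO-level summary. The mapping of username to worker handle is an assumption about how streamdl tracks its download processes, and the message wording is illustrative:

```python
import logging

logger = logging.getLogger("streamdl")

def log_active_downloads(active: dict) -> str:
    """Emit one INFO line summarising in-flight downloads.

    `active` is assumed to map username -> worker process/handle.
    Logging at INFO keeps this visible without enabling DEBUG noise,
    so it's easy to check whether rebuilding the container is safe.
    """
    if active:
        msg = "Currently downloading: " + ", ".join(sorted(active))
    else:
        msg = "No downloads in progress"
    logger.info(msg)
    return msg
```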

Docker Support

Is your feature request related to a problem? Please describe.
No, docker support is just a good idea for folks who may already have container servers.

Describe the solution you'd like
The ability to:

  • Build a custom container with a Dockerfile
  • Run docker-compose up -d and have it 'just work'
  • Have properly documented config to change things like container name, volume mount etc.

Describe alternatives you've considered
Kubernetes, but unless there's an actual need for that we can keep it at Docker for now.
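A hypothetical docker-compose.yml along the lines of the bullet points above; the service name, build context, mount paths, and restart policy are all illustrative assumptions, not the project's actual configuration:

```yaml
version: "3"
services:
  streamdl:
    container_name: streamdl      # documented, user-changeable
    build: .                      # custom build from a local Dockerfile
    volumes:
      - ./config.yml:/app/config.yml:ro   # channel config
      - ./downloads:/downloads            # recorded streams
    restart: unless-stopped
```

With something like this in place, `docker-compose up -d` covers the "just work" requirement.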

Log Rotation

Is your feature request related to a problem? Please describe.
The logfile could potentially grow to be a big boi; we should rotate it automatically.

Describe the solution you'd like
Automatic log rotation at a certain size, keeping 10 files by default (user-configurable), with older files removed after that.

Describe alternatives you've considered
N/A
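Python's standard library already covers this behaviour; a sketch using logging.handlers.RotatingFileHandler, where the size threshold, file count, and logger name are illustrative defaults:

```python
import logging
from logging.handlers import RotatingFileHandler

def build_rotating_logger(path: str, max_bytes: int = 5_000_000, backups: int = 10):
    """Log to `path`, rolling over at max_bytes and keeping `backups` old files.

    RotatingFileHandler deletes the oldest backup once the count is
    exceeded, which gives the removal-after-10-files behaviour for free.
    """
    handler = RotatingFileHandler(path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s: %(message)s"))
    log = logging.getLogger("streamdl.rotating")
    log.setLevel(logging.INFO)
    log.addHandler(handler)
    return log
```

Both max_bytes and backups would be read from the user's config to satisfy the configurability requirement.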
