dangeroustech / streamdl
Monitor and Download Streams from a Variety of Websites
License: MIT License
Dependabot can't resolve your Python dependency files.
As a result, Dependabot couldn't update your dependencies.
The error Dependabot encountered was:
Creating virtualenv streamdl-a9Mgguvh-py3.9 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...
PackageNotFound
Package streamlink (1.7.0) not found.
at /usr/local/.pyenv/versions/3.9.0/lib/python3.9/site-packages/poetry/repositories/pool.py:144 in package
140│ self._packages.append(package)
141│
142│ return package
143│
→ 144│ raise PackageNotFound("Package {} ({}) not found.".format(name, version))
145│
146│ def find_packages(
147│ self, dependency,
148│ ):
If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.
These builds take about an hour and really aren't worth it. Add instructions to the README for people who want to build this themselves on anything below a Pi 3B etc.
The current YAML is weird and isn't parsed properly; it should be more like:
twitch:
  - channel
  - other_channel
youtube:
  - place
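For the record, a minimal sketch of reading that structure with PyYAML, assuming a top-level mapping of site names to lists of channels (the function name and config path are just illustrative):

import yaml

def load_config(path="config.yml"):
    # e.g. {"twitch": ["channel", "other_channel"], "youtube": ["place"]}
    with open(path) as f:
        config = yaml.safe_load(f)
    return {site: list(channels or []) for site, channels in config.items()}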
I was trying this out today, hoping to switch to it from another project I've been using: https://github.com/jrudess/streamdvr
The two seem to have different structures. streamdvr feels a little heavier; it can lag when you have 20 streamers monitored like I do, mainly when starting and stopping the main process. That's one of the main reasons I'm hoping to move away from single-threaded Node to multithreaded Python.
I noticed that when I Ctrl+C out of streamdl (KeyboardInterrupt), the app doesn't gracefully close or finish post-processing the .part files. Could this be made possible?
StreamDVR records streams to streamdvr/capturing; when you gracefully exit the process, or when a stream finishes, it takes the .ts file, remuxes it (fixing the AAC stream), and moves it to the streamdvr/captured folder: https://github.com/jrudess/streamdvr/blob/master/scripts/postprocess_ffmpeg.sh
There is a downside to that for large streams, though: remuxing can take a while, even an hour, on a slower disk.
Would it be possible to support a similar folder structure to streamdvr?
It sorts recordings like this: streamdvr/captured/TWITCH/username/username_twitch_20200515_211526.mp4
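One possible shape for this, purely as a sketch (the capturing/captured folder names and the ffmpeg flags mirror streamdvr's postprocess script and are assumptions here, not streamdl's current behaviour): trap SIGINT/SIGTERM and remux whatever .ts files are waiting before exiting.

import signal
import subprocess
import sys
from pathlib import Path

def remux_all(src="capturing", dst="captured"):
    Path(dst).mkdir(exist_ok=True)
    for ts in Path(src).glob("*.ts"):
        out = Path(dst) / (ts.stem + ".mp4")
        # Copy the streams and fix the AAC bitstream, like streamdvr's postprocess_ffmpeg.sh
        subprocess.run(["ffmpeg", "-y", "-i", str(ts), "-c", "copy",
                        "-bsf:a", "aac_adtstoasc", str(out)], check=True)
        ts.unlink()

def handle_exit(signum, frame):
    remux_all()
    sys.exit(0)

signal.signal(signal.SIGINT, handle_exit)   # Ctrl+C
signal.signal(signal.SIGTERM, handle_exit)  # docker stop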
400mb is THICC
Feed the whale a diet plz
When the config file is updated on the host, it should be re-read and processed on the next tick.
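A simple way to get there, sketched with illustrative names: remember the config file's mtime and re-read it at the start of a tick whenever it has changed.

import os
import yaml

_last_mtime = 0.0

def maybe_reload(path="config.yml", current=None):
    # Return a freshly parsed config if the file changed on disk, otherwise the existing one
    global _last_mtime
    mtime = os.path.getmtime(path)
    if mtime == _last_mtime:
        return current
    _last_mtime = mtime
    with open(path) as f:
        return yaml.safe_load(f)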
Would it be possible to support Twitch's system for notifying you when streams go live? I haven't been able to find a script that uses it; all the monitoring I've seen happens on a cron-style schedule. It could solve the problem of API checks timing out when there are too many users to poll in one interval, and of the beginning of a recording being missed because the interval has to be slowed down to combat those timeouts.
This is the info I've been able to find on it:
https://dev.twitch.tv/docs/api/reference/#get-streams
https://discuss.dev.twitch.tv/t/webhook-twitch-start/21010
https://dev.twitch.tv/docs/api/webhooks-guide
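Webhooks (and their successor, EventSub) need a publicly reachable callback URL, so a lighter first step might be batching the polling: the Get Streams endpoint linked above accepts up to 100 user_login parameters in a single request, which tackles the timeout problem directly. A sketch of that batched check (client_id and token are placeholders; a registered Twitch application and OAuth token are required):

import requests

def live_channels(usernames, client_id, token):
    # One Helix "Get Streams" call covers up to 100 channels
    resp = requests.get(
        "https://api.twitch.tv/helix/streams",
        params=[("user_login", u) for u in usernames],
        headers={"Client-Id": client_id, "Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return {s["user_login"] for s in resp.json()["data"]}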
Clearly you just forgot this one mate, sort it out pls.
It makes no sense to set up logging and parse all the args every single time we repeat. Better to split that out so only the periodic check recurs.
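A rough sketch of that refactor (the argument names and the stub check are illustrative, not streamdl's actual code): do the one-time setup once, then run the periodic check in a plain loop, which also avoids the recursion-depth crash described below.

import argparse
import logging
import time

def check_streams(args):
    logging.info("checking streams...")  # stand-in for the real per-tick work

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--time", type=int, default=10, help="minutes between checks")
    args = parser.parse_args()               # args parsed once
    logging.basicConfig(level=logging.INFO)  # logging configured once
    while True:                              # plain loop instead of recursing
        check_streams(args)
        time.sleep(args.time * 60)

if __name__ == "__main__":
    main()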
Make a config option (maybe something in YAML?) to specify preferred quality (ideally best/worst, but maybe defaults like 720p/1080p/etc.).
This should do some sort of check and fail properly if that quality isn't available. We should probably just advise people to use best anyway, but still.
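Roughly how that check could look with youtube_dl's format selection (a sketch; the quality-to-format mapping and the function names are assumptions): request the configured quality and surface youtube_dl's "requested format not available" error instead of swallowing it.

import youtube_dl

def ydl_opts_for(quality="best"):
    if quality in ("best", "worst"):
        fmt = quality
    else:
        fmt = "best[height<={}]".format(quality.rstrip("p"))  # e.g. "720p" -> best[height<=720]
    return {"format": fmt}

def download(url, quality="best"):
    try:
        with youtube_dl.YoutubeDL(ydl_opts_for(quality)) as ydl:
            ydl.download([url])
    except youtube_dl.utils.DownloadError as err:
        raise SystemExit("{} not available for {}: {}".format(quality, url, err))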
Currently there are only tags for staging/stable/latest, but we should be tagging vx.y.z as well, because reasons.
The app crashes after reaching Python's maximum recursion depth, because it essentially recurses on the interval the user sets with the time flag. Need to find out whether there's a way to break out of the process or do this a different way.
Describe the bug
Python hits a recursion limit and crashes, leaving the container up but not doing anything.
Expected behavior
For this not to happen. Or for the container/program to automatically restart.
Log Output
streamdl_1 | Process pistol:
streamdl_1 | Traceback (most recent call last):
streamdl_1 | File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
streamdl_1 | self.run()
streamdl_1 | File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
streamdl_1 | self._target(*self._args, **self._kwargs)
streamdl_1 | File "streamdl.py", line 221, in download_video
streamdl_1 | ydl.download(["https://{}/{}/".format(url, user)])
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 2018, in download
streamdl_1 | url, force_generic_extractor=self.params.get('force_generic_extractor', False))
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 796, in extract_info
streamdl_1 | ie_result = ie.extract(url)
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 530, in extract
streamdl_1 | ie_result = self._real_extract(url)
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/twitch.py", line 580, in _real_extract
streamdl_1 | 'Downloading stream JSON').get('stream')
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/twitch.py", line 59, in _call_api
streamdl_1 | *args, **compat_kwargs(kwargs))
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 892, in _download_json
streamdl_1 | expected_status=expected_status)
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 870, in _download_json_handle
streamdl_1 | expected_status=expected_status)
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 660, in _download_webpage_handle
streamdl_1 | urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/extractor/common.py", line 627, in _request_webpage
streamdl_1 | return self._downloader.urlopen(url_or_request)
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py", line 2237, in urlopen
streamdl_1 | return self._opener.open(req, timeout=self._socket_timeout)
streamdl_1 | File "/usr/local/lib/python3.7/urllib/request.py", line 525, in open
streamdl_1 | response = self._open(req, data)
streamdl_1 | File "/usr/local/lib/python3.7/urllib/request.py", line 543, in _open
streamdl_1 | '_open', req)
streamdl_1 | File "/usr/local/lib/python3.7/urllib/request.py", line 503, in _call_chain
streamdl_1 | result = func(*args)
streamdl_1 | File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.7/site-packages/youtube_dl/utils.py", line 2724, in https_open
streamdl_1 | req, **kwargs)
streamdl_1 | File "/usr/local/lib/python3.7/urllib/request.py", line 1322, in do_open
streamdl_1 | r = h.getresponse()
streamdl_1 | File "/usr/local/lib/python3.7/http/client.py", line 1344, in getresponse
streamdl_1 | response.begin()
streamdl_1 | File "/usr/local/lib/python3.7/http/client.py", line 330, in begin
streamdl_1 | self.headers = self.msg = parse_headers(self.fp)
streamdl_1 | File "/usr/local/lib/python3.7/http/client.py", line 224, in parse_headers
streamdl_1 | return email.parser.Parser(_class=_class).parsestr(hstring)
streamdl_1 | File "/usr/local/lib/python3.7/email/parser.py", line 67, in parsestr
streamdl_1 | return self.parse(StringIO(text), headersonly=headersonly)
streamdl_1 | File "/usr/local/lib/python3.7/email/parser.py", line 56, in parse
streamdl_1 | feedparser.feed(data)
streamdl_1 | File "/usr/local/lib/python3.7/email/feedparser.py", line 176, in feed
streamdl_1 | self._call_parse()
streamdl_1 | File "/usr/local/lib/python3.7/email/feedparser.py", line 180, in _call_parse
streamdl_1 | self._parse()
streamdl_1 | File "/usr/local/lib/python3.7/email/feedparser.py", line 256, in _parsegen
streamdl_1 | if self._cur.get_content_type() == 'message/delivery-status':
streamdl_1 | File "/usr/local/lib/python3.7/email/message.py", line 578, in get_content_type
streamdl_1 | value = self.get('content-type', missing)
streamdl_1 | File "/usr/local/lib/python3.7/email/message.py", line 471, in get
streamdl_1 | return self.policy.header_fetch_parse(k, v)
streamdl_1 | File "/usr/local/lib/python3.7/email/_policybase.py", line 316, in header_fetch_parse
streamdl_1 | return self._sanitize_header(name, value)
streamdl_1 | File "/usr/local/lib/python3.7/email/_policybase.py", line 287, in _sanitize_header
streamdl_1 | if _has_surrogates(value):
streamdl_1 | File "/usr/local/lib/python3.7/email/utils.py", line 57, in _has_surrogates
streamdl_1 | s.encode()
streamdl_1 | RecursionError: maximum recursion depth exceeded while calling a Python object
Then the next run comes back with these each time ytdl is launched:
streamdl_1 | 2020-04-02 00:24:08,300 WARNING: Unexpected Error: <class 'RecursionError'>
Platform (please complete the following information):
Config (please post your config.yml file):
twitch.tv:
  - pistol
Dependabot can't resolve your Python dependency files.
As a result, Dependabot couldn't update your dependencies.
The error Dependabot encountered was:
Creating virtualenv streamdl-_Hubs6jp-py3.9 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...
PackageNotFound
Package stevedore (3.2.0) not found.
at /usr/local/.pyenv/versions/3.9.0/lib/python3.9/site-packages/poetry/repositories/pool.py:144 in package
140│ self._packages.append(package)
141│
142│ return package
143│
→ 144│ raise PackageNotFound("Package {} ({}) not found.".format(name, version))
145│
146│ def find_packages(
147│ self, dependency,
148│ ):
If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.
Dependabot couldn't authenticate with https://pypi.python.org/simple/.
You can provide authentication details in your Dependabot dashboard by clicking into the account menu (in the top right) and selecting 'Config variables'.
The old protobuf implementation is triggering security flags more and more because it's now officially deprecated. We should upgrade to https://pkg.go.dev/google.golang.org/protobuf
Blog post here, although there's no hint of a migration guide (guess we'll come up with that ourselves): https://go.dev/blog/protobuf-apiv2
Bumping PyYAML to the current version has surfaced a deprecation warning: we should pass a Loader= parameter when calling yaml.load.
https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation
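The fix itself is a one-liner: either pass an explicit Loader or switch to safe_load.

import yaml

with open("config.yml") as f:
    config = yaml.load(f, Loader=yaml.SafeLoader)  # explicit Loader, no deprecation warning
    # or simply: config = yaml.safe_load(f)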
Describe the bug
docker-compose stop terminates the container and leaves unreadable .part files.
Expected behavior
docker-compose stop should soft-kill youtube-dl and cause it to properly write out the currently downloading videos.
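If streamdl handles SIGTERM, compose also needs to allow time for the post-processing before the hard kill. A hedged compose snippet (the image name and the 5-minute grace period are just examples):

services:
  streamdl:
    image: dangeroustech/streamdl:latest  # image name assumed
    stop_signal: SIGTERM                  # compose default, shown for clarity
    stop_grace_period: 5m                 # wait up to 5 minutes before SIGKILL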
Ezpz, lemme grab them VODs baybee
I have the time set to 10, but we're ticking way quicker than every 10 mins; is this value actually in seconds?
Is your feature request related to a problem? Please describe.
I'd like to see which downloads are active when I look at the logs.
Describe the solution you'd like
When viewing the logfile, or the docker logs, I'd like a log message for each user that's currently downloading so I can know whether it's safe to rebuild the container without losing an in-progress download.
Describe alternatives you've considered
Reading the DEBUG output for the processes provides this info, but if the log level is set to something like INFO to be less chatty than DEBUG, it isn't visible.
Is your feature request related to a problem? Please describe.
No, docker support is just a good idea for folks who may already have container servers.
Describe the solution you'd like
The ability to:
Describe alternatives you've considered
Kubes, but unless there's actual need for it we can keep it at Docker for now.
Poetry is good; pipenv seems less than great, especially with packages that have OS-dependent dependencies.
Presuming that people know what they're doing, allow them to pass custom options to YTDL.
This should fail properly if there is some sort of error in their specified config.
For instance, if you know someone streams at 4K but you only want to save the 720p version, find a way to do this either in youtube_dl or after the fact using ffmpeg.
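A sketch of the pass-through (the ydl_options config key is hypothetical): merge the user's options over streamdl's defaults and let errors from youtube_dl surface rather than being swallowed.

import youtube_dl

DEFAULT_OPTS = {"format": "best", "quiet": True}

def download(url, user_opts=None):
    # User-supplied options win over the defaults; bad values typically surface as DownloadError
    opts = {**DEFAULT_OPTS, **(user_opts or {})}
    try:
        with youtube_dl.YoutubeDL(opts) as ydl:
            ydl.download([url])
    except youtube_dl.utils.DownloadError as err:
        raise SystemExit("youtube_dl rejected the configured options or URL: {}".format(err))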
Will probably require splitting the Docker build/push step into two so we can build, scan, then push - and having the Snyk scan not continue on error, so it stops the pipeline if there are new vulns.
Is your feature request related to a problem? Please describe.
The logfile could potentially get to be a big boi; we should automatically rotate it.
Describe the solution you'd like
Automatic log rotation at a certain size, keeping 10 files by default (the user should be able to configure this), then deleting the oldest beyond that.
Describe alternatives you've considered
N/A
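The standard library already covers this; a sketch using RotatingFileHandler (the 10 MB size and filename are example defaults):

import logging
from logging.handlers import RotatingFileHandler

def setup_logging(logfile="streamdl.log", max_bytes=10 * 1024 * 1024, backup_count=10):
    # Rotate at ~10 MB, keep 10 old files, delete anything older
    handler = RotatingFileHandler(logfile, maxBytes=max_bytes, backupCount=backup_count)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s: %(message)s"))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.INFO)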
Describe the bug
Log parameter should insert a default filename if only given a path.
Expected behavior
If given a folder, it should append streamdl.log to the end.
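Sketch of the intended behaviour (the function name is illustrative):

import os

def resolve_logfile(log_arg):
    # If --log points at a directory, default to streamdl.log inside it
    if os.path.isdir(log_arg):
        return os.path.join(log_arg, "streamdl.log")
    return log_arg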