
bbc / brave


Basic Real-time AV Editor - allowing you to preview, mix, and route live audio and video streams on the cloud

License: Apache License 2.0

Languages: Python 68.37%, CMake 1.87%, C++ 3.09%, C 3.90%, HTML 2.88%, JavaScript 18.65%, CSS 0.88%, Dockerfile 0.24%, Makefile 0.11%
Topics: gstreamer, python, multimedia, rtmp, video-streams, video-handling, live-streaming

brave's Introduction

Brave


This project is an open-source prototype. See 'Project status' and 'License' below for more.

Brave is a Basic real-time (remote) audio/video editor. It allows LIVE video (and/or audio) to be received, manipulated, and sent elsewhere. It is API driven and is designed to work remotely, such as on the cloud.

Example usage includes:

  • Forwarding RTMP from one place to another
  • Changing the size of video, and having a holding slate if the input disappears
  • Mixing two or more inputs
  • Adding basic graphics (images, text, etc)
  • Previewing video streams using WebRTC

Brave is based on GStreamer. It is, in one sense, a RESTful API for GStreamer (for live audio/video handling).

To learn more, read below, or see the FAQ, API guide, How-to guide and Config file guide.

Architecture diagram

(Architecture diagram image)

Web interface screenshot

(Web interface screenshot image)

This web interface is optional; Brave can be controlled via the API or startup config file.

Alternatives to consider

Similar open-source projects to this include:

Capabilities

Brave allows you to configure inputs, outputs, mixers and overlays. You can have any number of each (subject to the limitations of your server). They can be created at startup using a config file, or created and changed dynamically via REST API.

Inputs

An input is a source of audio or video. There can be any number of inputs, added or removed at any time, which can then be sent to mixers and outputs. Input types include:

  • Live and non-live streams, through protocols such as RTMP, RTSP, and HLS
  • Files (e.g. mp4 or mp3) - either local or downloaded remotely
  • Images (PNG/SVG/JPEG)
  • MPEG or OGG retrieved via a TCP connection
  • Test audio / video streams

Read more about input types.
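
For example (a sketch; the full set of fields for each input type is in the input types documentation), two test inputs can be declared in the startup config file like this:

inputs:
- type: test_video   # built-in test pattern source
  pattern: 18        # which test pattern to show
- type: test_audio   # built-in test tone
  freq: 440          # tone frequency in Hz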

Outputs

An output is how the constructed audio/video is then sent, served or saved. There can be any number of outputs, added or removed at any time. Output types include:

  • RTMP - which can then send to Facebook Live and YouTube Live
  • TCP Server - which clients such as VLC can connect to
  • Local file - writing an mp4 file
  • Image - writing a JPEG file of the video periodically
  • WebRTC - for near-realtime previewing of the video (and audio)
  • AWS Kinesis Video Stream
  • Local - for playback on a local machine

Read more about output types.
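
For example, an RTMP output taking its audio and video from a mixer can be declared in the config file (field names as used in the config examples later on this page; the URI is a placeholder):

outputs:
- type: rtmp
  uri: rtmp://example.com/live/stream   # destination RTMP endpoint
  source: mixer1                        # which mixer (or input) to send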

Overlays

An overlay is something that can overlay the video from an input or mixer. (Overlays do not exist for audio.) There can be any number of overlays.

Supported overlay types:

  • Clock (place a clock over the video)
  • Text (write text over the video)
  • Effects

Read more about overlay types.
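
For example, a text overlay on an input can be configured with fields like these (a sketch based on the overlay details that appear in the logs later on this page; see the overlay types documentation for the full list):

overlays:
- type: text
  text: Hello world    # the text to draw over the video
  source: input1       # which input or mixer to overlay
  font_size: 150
  visible: true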

Mixers

There can be any number of mixers. Each mixer can take any number of inputs (including the output from another mixer) and can send to any number of outputs. Read more about mixers.
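
A minimal mixer definition in the config file might look like this (a sketch; see the mixer documentation for the full set of fields):

mixers:
- width: 640
  height: 360
  sources:
  - uid: input1    # take input1 into the mix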

Project status

This project is still a work in progress, and has not been thoroughly tested or used in any production environment.

Installation

First, install the dependencies, and then clone this repo.

Dependencies

  • Python 3.6 (or higher)
  • GStreamer 1.14.3 or higher (including the good/bad/ugly packages)
  • Multiple Python libraries (installed by pipenv)

Install guides

How to use

To start:

./brave.py

Brave has an API and a web interface, both served by default on port 5000. If running locally, access the web interface by pointing your browser at:

http://localhost:5000/

To change the port, either set the PORT environment variable, or set api_port in the config file.
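
For example:

PORT=8080 ./brave.py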

Configuring inputs, outputs, overlays and mixers

There are three ways to configure Brave:

  1. Web interface
  2. REST API (plus optional websocket)
  3. Config file

Web interface

The web interface is a simple client-side interface. It uses the API to allow the user to view and control Brave's setup.

The web interface can be found at http://localhost:5000/. (If running on a remote server, replace localhost with the name of your server.)

API

The API allows read/write access of the state of Brave, including being able to create new inputs, outputs, and overlays dynamically. See the API documentation for more.
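
As an illustrative sketch (the exact endpoint paths and request bodies are defined in the API documentation, so treat these as assumptions), the full state can be read and a new input created with something like:

curl http://localhost:5000/api/all
curl -X PUT http://localhost:5000/api/inputs -H 'Content-Type: application/json' -d '{"type": "test_video"}'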

Config file

Brave can be configured with a config file. This includes having certain inputs, mixers, outputs and overlays created when Brave starts.

Provide an alternative config file with the -c parameter, e.g.

./brave.py -c config/example_empty.yaml

See the Config File documentation for more.

STUN and TURN servers for WebRTC

A STUN or TURN server is likely required for Brave's WebRTC to work between remote connections. Brave defaults to Google's public STUN server; this can be overridden in the config file, or by setting the STUN_SERVER environment variable. Likewise, a TURN_SERVER environment variable can be set if a TURN server is required. Its value should be in the format <username>:<credential>@<host>:<port>.
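
For example (the host names and credentials below are placeholders, and the exact value format expected for STUN_SERVER is described in the config documentation):

STUN_SERVER=stun.example.org:3478 ./brave.py
TURN_SERVER=myuser:mypassword@turn.example.org:3478 ./brave.py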

Tests

Brave has functional black-box tests that ensure the config file and API are working correctly. To run them:

pytest

A few useful pytest options:

  • To see the output, add -s.
  • To see the name of each test being run, add -v.
  • To run only failing tests, add --lf.
  • To run only the tests that match a string, add -k <string_to_match>.

All tests should pass.
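
For example, to run only the tests whose names match a given string, with their output shown:

pytest -s -v -k mixer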

Code quality (linting)

To check code quality, Flake8 is used. To run:

flake8 --count brave

Debugging

Brave is based on GStreamer, which is a complex beast. If you're experiencing errors or reliability issues, here's a guide on how to debug.

Run the tests

Run the test framework.

Logging

Brave outputs log messages, which should include all errors. To see finer-grained logging, set LOG_LEVEL=debug, e.g.

LOG_LEVEL=debug ./brave.py

For even more detail, ask GStreamer to provide additional debug output with:

GST_DEBUG=4 LOG_LEVEL=debug ./brave.py

Analyse the elements

Brave creates multiple GStreamer pipelines, each containing multiple linked elements. Spotting which element has caused an error can help track down the problem.

To see them, select 'Debug view' in the web interface, or visit the /api/elements API endpoint.

Look out for:

  • Elements not in the PLAYING state
  • Elements with different caps

If there are situations where it works and situations where it doesn't, try capturing the two /api/elements responses and diffing them.
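
For example, assuming Brave is running on its default port, capture the element state in each situation and compare the two:

curl http://localhost:5000/api/elements > when_working.json
curl http://localhost:5000/api/elements > when_broken.json
diff when_working.json when_broken.json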

Switch off audio or video

If you're manipulating video that has audio, try disabling audio using enable_audio: false in the config file.

Then, similarly, try disabling video using enable_video: false.

This will help indicate whether it's the audio handling or the video handling that's at fault.
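
For example, in the config file:

enable_audio: false
enable_video: true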

Divide and conquer

If you can repeatably get an error, identify what's causing it by removing inputs/outputs/overlays until the problem goes away. Try to find the minimum setup required to cause the problem.

License

Brave is licensed under the Apache 2 license.

Brave uses GStreamer which is licensed under the LGPL. GStreamer is dynamically linked, and is not distributed as part of the Brave codebase. Here is the GStreamer license. Here is the GStreamer licensing documentation.

Copyright (c) 2019 BBC

brave's People

Contributors

bevand10, dependabot[bot], lucasdavila86, matthew1000, moschopsuk, n0toose, niklasr, sidsethupathi, thomaspreecebbc


brave's Issues

HTMLInput object has no attribute intervideosink

When attempting to add any HTML as an input via the web GUI, the following message is logged:

Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/brave-cXY7_SU8/lib/python3.6/site-packages/sanic/app.py", line 917, in handle_request
    response = await response
  File "/brave/brave/api/route_handler.py", line 13, in all
    'inputs': request['session'].inputs.summarise(),
  File "/brave/brave/abstract_collection.py", line 40, in summarise
    s.append(obj.summarise())
  File "/brave/brave/inputs/input.py", line 58, in summarise
    cap_props = self.get_input_cap_props()
  File "/brave/brave/inputs/html.py", line 48, in get_input_cap_props
    element = self.intervideosink
AttributeError: 'HTMLInput' object has no attribute 'intervideosink'

This is a Docker instance built from master on Aug 29, 2019.

Crop for picture by picture

webrtc previews not working

I used the Dockerfile to build a Docker image and deployed an instance. While everything else works fine, the WebRTC previews in the web interface fail with the error "ERROR: Failed to connect WebRTC with server". The browser console states "New ICE connection state: failed".

With LOG_LEVEL=DEBUG, I get the below logs in the console.

DEBUG: [ output1] Creating with pipeline: intervideosrc name=intervideosrc timeout=86400000000000 ! videoconvert ! videoscale ! videorate ! capsfilter name=capsfilter ! vp8enc deadline=1 keyframe-max-dist=30 ! rtpvp8pay ! application/x-rtp,format=RGB,media=video,encoding-name=VP8,payload=97,width=480,height=270 ! tee name=webrtc_video_tee webrtc_video_tee. ! fakesink interaudiosrc name=interaudiosrc ! audioconvert ! level message=true ! audioresample name=webrtc-audioresample ! opusenc bandwidth=superwideband ! rtpopuspay ! application/x-rtp,media=audio,encoding-name=OPUS,payload=96 ! tee name=webrtc_audio_tee webrtc_audio_tee. ! fakesink
DEBUG: [ input1] Pipeline state change from PAUSED to PAUSED (pending PAUSED)
DEBUG: [ input1] Message from GStreamer: Latency from output1_interaudiosink_857589
DEBUG: [ output1] Move to state READY complete
INFO: [api_routes] Created output #1 with details {'type': 'webrtc', 'source': 'input1'}
DEBUG: [ output1] Pipeline state change from NULL to READY
DEBUG: [ output1] Move to state PAUSED has completed but no data yet
DEBUG: [ input1] Pipeline state change from PAUSED to PAUSED (pending PLAYING)
DEBUG: [ input1] Pipeline state change from PAUSED to PLAYING
DEBUG: [ output1] Pipeline state change from READY to PAUSED
DEBUG: [ output1] Move to state PLAYING is IN PROGRESS
DEBUG: [ output1] Message from GStreamer: Latency from opusenc0
DEBUG: [ output1] Message from GStreamer: Latency from vp8enc0
DEBUG: [ output1] Pipeline state change from PAUSED to PLAYING
INFO: [ output1] I now have 1 peers
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="stun-server", property_value="stun://stun.l.google.com:19302"
DEBUG: [ output1] Pipeline state change from PLAYING to PAUSED (pending READY)
DEBUG: [ output1] Pipeline state change from PAUSED to READY
DEBUG: [ output1] Pipeline state change from READY to PAUSED (pending PLAYING)
DEBUG: [ output1] Successfully added a new peer request
DEBUG: [ output1] Message from GStreamer: Latency from opusenc0
DEBUG: [ output1] Message from GStreamer: Latency from vp8enc0
DEBUG: [ output1] Pipeline state change from PAUSED to PLAYING
DEBUG: [ output1] Sending SDP offer to client (1204 chars in length)
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="signaling-state", property_value="enum GST_WEBRTC_SIGNALING_STATE_HAVE_LOCAL_OFFER of type GstWebRTC.WebRTCSignalingState"
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="ice-gathering-state", property_value="enum GST_WEBRTC_ICE_GATHERING_STATE_COMPLETE of type GstWebRTC.WebRTCICEGatheringState"
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="signaling-state", property_value="enum GST_WEBRTC_SIGNALING_STATE_STABLE of type GstWebRTC.WebRTCSignalingState"
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="ice-connection-state", property_value="enum GST_WEBRTC_ICE_CONNECTION_STATE_CHECKING of type GstWebRTC.WebRTCICEConnectionState"
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="connection-state", property_value="enum GST_WEBRTC_PEER_CONNECTION_STATE_CONNECTING of type GstWebRTC.WebRTCPeerConnectionState"
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="ice-connection-state", property_value="enum GST_WEBRTC_ICE_CONNECTION_STATE_FAILED of type GstWebRTC.WebRTCICEConnectionState"
DEBUG: [ output1] Property notify: object="webrtcbin0", property_name="connection-state", property_value="enum GST_WEBRTC_PEER_CONNECTION_STATE_FAILED of type GstWebRTC.WebRTCPeerConnectionState"
DEBUG: [ output1] In PLAYING state: pipeline, queue9, queue8, webrtcbin0, capsfilter8, capsfilter7, fakesink3, webrtc_audio_tee, rtpopuspay0, opusenc0, webrtc-audioresample, level0, audioconvert1, interaudiosrc, fakesink2, webrtc_video_tee, rtpvp8pay0, vp8enc0, capsfilter, videorate0, videoscale2, videoconvert2, intervideosrc

Enabling more verbose logging spits out too much gibberish that I can't make sense of. How do I make the webrtc previews work?

Machine Specs

Hello! Thank you for this project, it's fantastic!

I ran it on an AWS EC2 t2.micro (1 CPU, 1 GB RAM) and it reaches 100% CPU usage (so it crashes). I then tried it on my machine (8th-generation Intel Core i5, 8 GB RAM) and it works well with one RTMP input and one RTMP output, but if I add a mixer in the middle, Brave crashes. Have any of you successfully tested Brave and care to share your machine's specs?

Thank you!

[Question] is HTML overlay possible?

I want an HTML overlay with CSS animations and all. I am able to achieve static overlays by converting the HTML into an image and merging it with the video frames using OpenCV, but I'm looking for something like the OBS Studio browser overlay feature.

Split video inputs into a single Mixer

Is it possible to split the view with multiple video inputs in a mixer?
With audio, we are able to hear all the mixed audio at the same time, but with video, the inputs end up in front of each other.

Does someone have an idea of how I can put two or more videos side by side?

Here's my current config, for everyone who wants to try. (Just change the RTMP URI.)

default_mixer_height: 360
default_mixer_width: 640
enable_audio: true
enable_video: true
inputs:
- height: 360
  id: 5
  pattern: 18
  state: PLAYING
  type: test_video
  width: 640
- freq: 440
  id: 6
  state: PLAYING
  type: test_audio
  volume: 0.3
  wave: 3
- height: 360
  id: 7
  pattern: 1
  state: PLAYING
  type: test_video
  width: 640
- freq: 440
  id: 8
  state: PLAYING
  type: test_audio
  volume: 1.0
  wave: 8
mixers:
- height: 360
  id: 2
  pattern: 24
  sources:
  - in_mix: false
    uid: mixer3
  - in_mix: true
    uid: mixer4
  state: PLAYING
  type: mixer
  width: 640
- height: 360
  id: 3
  pattern: 0
  sources:
  - in_mix: false
    uid: input5
  - in_mix: false
    uid: input6
  - in_mix: false
    uid: mixer2
  state: PLAYING
  type: mixer
  width: 640
- height: 360
  id: 4
  pattern: 0
  sources:
  - in_mix: true
    uid: input7
  - in_mix: true
    uid: input8
  - in_mix: false
    uid: mixer2
  - in_mix: false
    uid: mixer3
  state: PLAYING
  type: mixer
  width: 640
outputs:
- height: 360
  id: 4
  source: mixer2
  state: PLAYING
  type: rtmp
  uri: {CHANGE HERE}
  width: 640
overlays:
- effect_name: vertigotv
  id: 1
  source: mixer4
  state: PLAYING
  type: effect
  visible: true

I am facing this issue with the library

INFO: [api_routes] Created mixer #1 with details {'pattern': '2', 'width': 640, 'height': 540, 'sources': [{'uid': 'input1', 'width': 60, 'height': 90}]}
WARNING: [ overlay1] Property not boolean: "true"
INFO: [api_routes] Created overlay #1 with details {'type': 'text', 'visible': 'true', 'text': '1', 'source': 'input1', 'font_size': 150}
INFO: [api_routes] Created output #1 with details {'type': 'image', 'width': 640, 'height': 540}
ERROR: [ output1] GStreamer error from sink: gst-resource-error-quark: Error while writing to file "/usr/local/share/brave/output_images/img_17863.jpg". (10)
ERROR: [ output1] GStreamer error debug: gstmultifilesink.c(793): gst_multi_file_sink_write_buffer (): /GstPipeline:pipeline3/GstMultiFileSink:sink:
No such file or directory
ERROR: [ output1] GStreamer error message: Error while writing to file "/usr/local/share/brave/output_images/img_17863.jpg".

Multiple webrtc preview

@moschopsuk Are there any settings/configs for multiple WebRTC preview windows? Instead of replacing the existing WebRTC play URL, could a new video element be created and a new WebRTC play URL added, so that both video players play simultaneously with different input sources?

Failure to start

Hi

I thought I would investigate this project for live streaming. So far I've unfortunately gotten stuck on starting it.

System
macOS Big Sur 11.2.1

Steps to reproduce

git clone git@github.com:bbc/brave.git
pip3 install --user pipenv
pipenv install
brew install libffi
(added to .bash_profile) export PKG_CONFIG_PATH="/usr/local/opt/libffi/lib/pkgconfig"
brew install libnice openssl librsvg libvpx srtp
brew install gstreamer
brew install gst-plugins-base
brew install gst-plugins-good
brew install gst-plugins-bad
brew install gst-plugins-ugly
brew install gst-libav gst-python
pipenv run python3 brave.py

Result

Error processing line 1 of /Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/ruamel.yaml-0.16.10-py3.8-nspkg.pth:

  Traceback (most recent call last):
    File "<frozen importlib._bootstrap_external>", line 1281, in _path_importer_cache
  KeyError: 'VextFinder.PATH_TRIGGER'

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site.py", line 169, in addpackage
      exec(line)
    File "<string>", line 1, in <module>
    File "<frozen importlib._bootstrap_external>", line 1189, in __contains__
    File "<frozen importlib._bootstrap_external>", line 1164, in _recalculate
    File "<frozen importlib._bootstrap_external>", line 1311, in _get_spec
    File "<frozen importlib._bootstrap_external>", line 1283, in _path_importer_cache
    File "<frozen importlib._bootstrap_external>", line 1259, in _path_hooks
    File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/gatekeeper/__init__.py", line 227, in __init__
      sitedir = getsyssitepackages()
    File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/env/__init__.py", line 99, in getsyssitepackages
      output = run()
    File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/env/__init__.py", line 42, in call_f
      output = subprocess.check_output(cmd, env=env).decode('utf-8')
    File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 411, in check_output
      return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
    File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 489, in run
      with Popen(*popenargs, **kwargs) as process:
    File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 854, in __init__
      self._execute_child(args, executable, preexec_fn, close_fds,
    File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 1583, in _execute_child
      and os.path.dirname(executable)
    File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/posixpath.py", line 152, in dirname
      p = os.fspath(p)
  TypeError: expected str, bytes or os.PathLike object, not NoneType

Remainder of file ignored
Error in sitecustomize; set PYTHONVERBOSE for traceback:
TypeError: expected str, bytes or os.PathLike object, not NoneType
Failed checking if argv[0] is an import path entry
Traceback (most recent call last):
  File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/gatekeeper/__init__.py", line 227, in __init__
    sitedir = getsyssitepackages()
  File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/env/__init__.py", line 99, in getsyssitepackages
    output = run()
  File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/env/__init__.py", line 42, in call_f
    output = subprocess.check_output(cmd, env=env).decode('utf-8')
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 411, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 489, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 1583, in _execute_child
    and os.path.dirname(executable)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/posixpath.py", line 152, in dirname
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
  File "<frozen importlib._bootstrap_external>", line 1281, in _path_importer_cache
KeyError: 'VextFinder.PATH_TRIGGER'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "brave.py", line 10, in <module>
    import brave.session
  File "/Users/testuser/dev/experiment/brave/brave/session.py", line 6, in <module>
    from brave.helpers import get_logger
  File "/Users/testuser/dev/experiment/brave/brave/helpers.py", line 3, in <module>
    import gi
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 914, in _find_spec
  File "<frozen importlib._bootstrap_external>", line 1342, in find_spec
  File "<frozen importlib._bootstrap_external>", line 1311, in _get_spec
  File "<frozen importlib._bootstrap_external>", line 1283, in _path_importer_cache
  File "<frozen importlib._bootstrap_external>", line 1259, in _path_hooks
  File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/gatekeeper/__init__.py", line 227, in __init__
    sitedir = getsyssitepackages()
  File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/env/__init__.py", line 99, in getsyssitepackages
    output = run()
  File "/Users/testuser/.local/share/virtualenvs/brave-rDStWLUb/lib/python3.8/site-packages/vext/env/__init__.py", line 42, in call_f
    output = subprocess.check_output(cmd, env=env).decode('utf-8')
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 411, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 489, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 854, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 1583, in _execute_child
    and os.path.dirname(executable)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/posixpath.py", line 152, in dirname
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

Communication lost between intervideosink and intervideosrc

Hi there, congrats for the project!

After a couple of tests with different inputs and outputs, I realized there were always some frozen frames (no matter what the input/output combination was). Digging a bit into the code, I figured out that if I set the timeout to 0 in outputs/output.py like this:

def _video_pipeline_start(self):
    '''
    The standard start to the pipeline string for video.
    It starts with intervideosrc, which accepts video from the source.
    '''
    # The large timeout holds any stuck frame for 24 hours (basically, a very long time)
    # This is optional, but prevents it from going black when it's better to show the last frame.
    #timeout = Gst.SECOND * 60 * 60 * 24
    timeout = 0
    return ('intervideosrc name=intervideosrc timeout=%d ! videoconvert ! videoscale ! '
            'videorate ! capsfilter name=capsfilter ! ' % timeout)

then the output stream always has some black frames (there can be more or fewer depending on the video caps, but they're always there). Example with this config file:

mixers:
- height: 720
  id: 1
  pattern: 0
  sources: []
  state: PLAYING
  type: mixer
  width: 1280
outputs:
- audio_bitrate: 128000
  container: mpeg
  height: 720
  host: 127.0.1.1
  id: 1
  port: 7000
  source: mixer1
  state: PLAYING
  type: tcp
  width: 1280

Output of:

ffmpeg -i tcp://127.0.1.1:7000 -vf "blackdetect=d=0:pix_th=0.00" -an -f null -

Input #0, mpegts, from 'tcp://127.0.1.1:7000':
  Duration: N/A, start: 3601.793733, bitrate: N/A
  Program 1 
    Stream #0:0[0x41]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 30 fps, 30 tbr, 90k tbn, 60 tbc
    Stream #0:1[0x42](en): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 128 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> wrapped_avframe (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0: Video: wrapped_avframe, yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
    Metadata:
      encoder         : Lavc57.107.100 wrapped_avframe
[blackdetect @ 0x55a9d5dbf000] black_start:0.2 black_end:0.233344 black_duration:0.0333444
[blackdetect @ 0x55a9d5dbf000] black_start:2.1 black_end:2.13334 black_duration:0.0333444
[blackdetect @ 0x55a9d5dbf000] black_start:2.33334 black_end:2.36667 black_duration:0.0333222
[blackdetect @ 0x55a9d5dbf000] black_start:2.86667 black_end:2.9 black_duration:0.0333333
[blackdetect @ 0x55a9d5dbf000] black_start:4.6 black_end:4.63334 black_duration:0.0333444
[blackdetect @ 0x55a9d5dbf000] black_start:5.93334 black_end:5.96667 black_duration:0.0333222
[blackdetect @ 0x55a9d5dbf000] black_start:6.9 black_end:6.93334 black_duration:0.0333444
[blackdetect @ 0x55a9d5dbf000] black_start:7 black_end:7.03334 black_duration:0.0333444
[blackdetect @ 0x55a9d5dbf000] black_start:7.8 black_end:7.83334 black_duration:0.0333444
[blackdetect @ 0x55a9d5dbf000] black_start:7.86667 black_end:7.9 black_duration:0.0333333
[blackdetect @ 0x55a9d5dbf000] black_start:8.33334 black_end:8.36667 black_duration:0.0333222
[blackdetect @ 0x55a9d5dbf000] black_start:8.63334 black_end:8.66667 black_duration:0.0333222
[blackdetect @ 0x55a9d5dbf000] black_start:8.76667 black_end:8.8 black_duration:0.0333333
[blackdetect @ 0x55a9d5dbf000] black_start:10.6333 black_end:10.6667 black_duration:0.0333222

I separately tested an example pipeline with GStreamer like this:

gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1280,height=720,framerate=30/1 ! videoconvert ! intervideosink name=psink intervideosrc name=psrc timeout=0 ! video/x-raw,width=1280,height=720,framerate=30/1 ! videoconvert ! autovideosink

And, although it's not behaving "perfect" (1 or 2 black frames per hour), it seems to be losing less data than Brave.

I'm running my tests on different computers, with/without virtual machines and with/without Docker, with minimal differences.

Now my question is: are you aware of this behaviour? Did you manage to solve or mitigate it?

Best Regards and thank you very much

Getting an error when running pipenv run ./brave.py

Cannot start Rest API: Sanic instance cannot be unnamed. Please use Sanic(name='your_application_name') instead.
I am checking for a solution to this.

I have fixed the error by adding a Sanic app name; after that the app runs, but I get an internal server error on the URL.

Using brave as RTMP server

Hi, I was wondering if there is a way to configure Brave to act as an RTMP/RTSP server, so that I can push an RTMP/RTSP stream to the server directly instead of using an input element to pull the stream from another server?

QUIC support?

SRT versus QUIC...

Wondering if QUIC is on your radar as SRT seems to not support CMAF and so will probably die a slow death.

"GStreamer error: clock problem." when changing base resolution

My input and mixer objects are both 1280x720, but when I add an output stream it automatically takes the default resolution of 640x360 instead of that of its source object.

So I changed the default width and height in my configuration, but then the output object (to RTMP) breaks with:
GStreamer error: clock problem.

This happens with both gstreamer 1.14.x and 1.16.x

The same error occurs when setting the resolution on the output object directly.

Websocket issue

Hi

I see this in the server log:

ERROR:root:Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/sanic/app.py", line 603, in handle_request
response = await response
File "/usr/lib/python3.6/site-packages/sanic/app.py", line 295, in websocket_handler
ws = await protocol.websocket_handshake(request, subprotocols)
File "/usr/lib/python3.6/site-packages/sanic/websocket.py", line 69, in websocket_handshake
key = handshake.check_request(get_header)
File "/usr/lib64/python3.6/site-packages/websockets/handshake.py", line 82, in check_request
[parse_connection(value) for value in headers.get_all('Connection')], []
AttributeError: 'function' object has no attribute 'get_all'

and on the website I see issues creating the WS connection:
[Debug] Showing this top – "warning" – " message:" – "Server connection lost, retrying..." (index.js, line 53)
[Error] WebSocket connection to 'ws://18.184.174.7:5000/socket' failed: Unexpected response code: 500

Any particular things to consider when proxying through nginx for ssl?

Hello and congratulations on this awesome project.

Are there any things, like other ports, to consider when proxying through nginx to be able to publish as HTTPS?
I am trying with this configuration in nginx:
location /brave {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:5000;
    proxy_redirect http://localhost:5000 https://example.domain.com;
}

Or is there maybe built-in functionality to enable SSL?
Thank you.

mixer wrong dimension and green

hey 🤚 great project!! ❤️

I'm having the following problem.
When I have a couple of sources (uri, mp4) and I cut_to_source them on mixer1, on the first run it looks like this
(wrong color, and I think it is the wrong dimensions):

Bildschirmfoto 2019-10-28 um 10 34 45

Clicking on the eye to delete it from the mixer, and then cut_to_source again, fixes it:
Bildschirmfoto 2019-10-28 um 10 57 42

If I add an output directly to the source, it is stable/normal from the beginning, so I suspect some problem in the mixer. Do you have any idea where I could start looking?

FYI: Docker base system ubuntu 18.10 creates an error, maybe update?

Hello Gentlemen,

first of all, I love your project and I will fork it in the future, since I started from a greenfield with the same approach and, in the end, it makes hardly any sense to reinvent the wheel.

As I am using Docker and wanted to build your container today, I found that, as of a few days ago, apt packages are no longer maintained for Ubuntu 18.10, so maybe this would be a good opportunity to upgrade?

E: The repository 'http://archive.ubuntu.com/ubuntu cosmic Release' does not have a Release file.

intersink/intersrc - A/V delay

Having a simple input file running in a loop and producing a TCP output produces, over time (like 12h+), a significant A/V delay.

It is not really about the time/hours: it seems that once the CPU usage on the host peaks and blocks the x264 encoder, audio continues flawlessly but video lags slightly behind.

The folks at voctomix have also suffered from that issue; they somehow overcame it by restructuring their pipelines. voc/voctomix#58

However, they cut out the TCP input for that to work, and don't really support dynamic inputs (in general they drift more in an SDI-only direction).

Currently I am playing around with: https://github.com/RidgeRun/gst-interpipe

Do you have any idea how to solve the problem? (Over-scaling the CPU is one thing, but it's no guarantee that even the biggest CPU will keep up 100% of the running time.) Restarting (stop/play) the mixer/output solves it but interrupts the service.

docs about the web api

Hi:

Great project. I want to know if we can have more docs about the web API; I want to run Brave on the server side.

Is brave dead?

Has there been any more discussion and work on replacing intervideo/interaudio with RidgeRun's interpipes?

Setup CI

Currently we have no CI in place to check that branches are working before being merged in.
While we have our own Jenkins internally, it would be very hard for contributors to check the status of their builds, so we would require something that is more open.

I recommend something like Travis to check out the code and run the Python tests.

Unable to setup brave on Ubuntu 20.04

I am trying to set up Brave on an Ubuntu 20.04 virtual machine. I am unable to install python-gst-1.0. I tried installing python3-gst-1.0 instead, but when I run pipenv install it gives an error:
An error occurred while installing pillow==6.2.0
Any idea what should be done in order to fix this issue and install brave successfully?

Volume can't be changed once set

Regression: attempts to change the volume of an input, once created, fail.

(The API accepts it but the change does not occur.)

I can't run Brave on my Mac M1

I have done all the items in this doc: https://github.com/bbc/brave/blob/master/docs/install_macos.md
But I see this error:

2022-11-22 14:21:40,379     INFO: [       run] Plugin nice is None
2022-11-22 14:21:40,379     INFO: [       run] Plugin webrtc is None
2022-11-22 14:21:40,379     INFO: [       run] Plugin dtls is None
2022-11-22 14:21:40,379     INFO: [       run] Plugin x264 is None
2022-11-22 14:21:40,379     INFO: [       run] Plugin srtp is None
2022-11-22 14:21:40,379     INFO: [       run] Plugin rtmp is None
2022-11-22 14:21:40,379     INFO: [       run] Plugin opus is <Gst.Plugin object at 0x10381c8c0 (GstPlugin at 0x12e2a5060)>
2022-11-22 14:21:40,379     INFO: [       run] Plugin vpx is <Gst.Plugin object at 0x101175140 (GstPlugin at 0x12e2ceae0)>
2022-11-22 14:21:40,379     INFO: [       run] Plugin multifile is <Gst.Plugin object at 0x10118a880 (GstPlugin at 0x12e2d4b10)>
2022-11-22 14:21:40,379     INFO: [       run] Plugin tcp is <Gst.Plugin object at 0x101116840 (GstPlugin at 0x12e2de060)>
2022-11-22 14:21:40,379     INFO: [       run] Plugin rtpmanager is <Gst.Plugin object at 0x10383a580 (GstPlugin at 0x12e2acc20)>
2022-11-22 14:21:40,379     INFO: [       run] Plugin videotestsrc is <Gst.Plugin object at 0x10383ac00 (GstPlugin at 0x12e2d43f0)>
2022-11-22 14:21:40,379     INFO: [       run] Plugin audiotestsrc is <Gst.Plugin object at 0x10383a580 (GstPlugin at 0x12e295880)>
2022-11-22 14:21:40,379  WARNING: [       run] Missing gstreamer plugins:['nice', 'webrtc', 'dtls', 'x264', 'srtp', 'rtmp']

Help me please!

brave fails to start on ubuntu 18.10

After installing all the dependencies on Ubuntu 18.10, I was unable to start Brave. This error shows up:
Missing gstreamer plugins: ['faac']

I was reading about the Ubuntu packages, and it seems they dropped faac support in the GStreamer bad plugins, maybe in favor of avenc_aac / avdec_aac (these are supported, see below):

gst-inspect-1.0 | grep aac
libav:  avenc_aac: libav AAC (Advanced Audio Coding) encoder
libav:  avdec_aac: libav AAC (Advanced Audio Coding) decoder
libav:  avdec_aac_fixed: libav AAC (Advanced Audio Coding) decoder
libav:  avdec_aac_latm: libav AAC LATM (Advanced Audio Coding LATM syntax) decoder
libav:  avmux_adts: libav ADTS AAC (Advanced Audio Coding) muxer (not recommended, use aacparse instead)
audioparsers:  aacparse: AAC audio stream parser
typefindfunctions: audio/aac: aac, adts, adif, loas
voaacenc:  voaacenc: AAC audio encoder

Unable to add element intervideosink

Hi there,

I have built the latest source of Brave and performed a simple test:

  • installed: GStreamer 1.14.3, python3.6 and all other preliminaries including libraries.
  • simple test pipeline: a video input -> mixer -> output(rtmp)

However, it doesn't work when adding the input to the mixer.
I think the 'intersink' has not been created properly.

Does anyone know how to solve it?

debug log

[2021-03-25 13:01:29 +0900] [45497] [INFO] Goin' Fast @ http://0.0.0.0:5000
INFO: [ output1] RTMP output now configured to send to rtmp://[test_rtmpserver_ip_addr]/live/livestream
ERROR: [ input1] Unable to add element intervideosink
ERROR: [ input1] Unable to add element queue
Traceback (most recent call last):
File "brave.py", line 71, in
start_brave()
File "brave.py", line 61, in start_brave
session.start()
File "/home/bkim/git/brave/brave/session.py", line 36, in start
self._setup_initial_inputs_outputs_mixers_and_overlays()
File "/home/bkim/git/brave/brave/session.py", line 89, in _setup_initial_inputs_outputs_mixers_and_overlays
mixer.setup_sources()
File "/home/bkim/git/brave/brave/mixers/mixer.py", line 94, in setup_sources
connection.add_to_mix(details)
File "/home/bkim/git/brave/brave/connections/connection_to_mixer.py", line 47, in add_to_mix
self._ensure_elements_are_created()
File "/home/bkim/git/brave/brave/connections/connection_to_mixer.py", line 186, in _ensure_elements_are_created
self._create_video_elements()
File "/home/bkim/git/brave/brave/connections/connection_to_mixer.py", line 212, in _create_video_elements
intervideosrc, intervideosink = self._create_inter_elements('video')
File "/home/bkim/git/brave/brave/connections/connection.py", line 99, in _create_inter_elements
intersink.set_property('channel', channel_name)
AttributeError: 'NoneType' object has no attribute 'set_property'

yaml configuration

enable_video: true
enable_audio: true

default_mixer_width: 640
default_mixer_height: 360

inputs:
- id: 1
  type: test_video
  pattern: 18

outputs:
- type: rtmp
  uri: rtmp://[test_rtmpserver_ip_addr]/live/livestream
  source: mixer1

mixers:
- sources:
  - uid: input1
    zorder: 2
    width: 160
    height: 90

latency max < min on rtmp output

Hello, I'm using Brave and it works just fine.

I found an issue and I was able to fix it, but I don't really know if it would have other side effects. So I'm stating it here in order to contribute my finding, and maybe the authors will find it useful.

In some situations I get the following gstreamer error and the RTMP output goes red with a clock error.

The solution I found was to reduce the "key-int-max" value from 60 to 30 in brave/outputs/rtmp.py. The issue is not easily repeatable. I could only reproduce it consistently on an Azure VM. On another machine it stopped happening (I don't know why). On other machines it never happens.

I'm attaching some screenshots that I hope will clarify this matter. I can submit a pull request if you think that is appropriate.

I would appreciate it if the authors of Brave could share their thoughts on this finding and the fix I propose. I have no experience with GStreamer.

Thank you
João

Screenshot_2019-04-09_19-52-56

Screenshot_2019-04-09_16-30-17

Performance Buffer Improvements

I am experimenting a lot with Brave as a tool for my live streaming and broadcasting ideas. So far I think I have understood a lot about it (and have even implemented a ton of additional configuration parameters for myself; I may PR them at some point).

However, I often run into this error message:

GStreamer warning debug: gstbasesink.c(3005): gboolean gst_base_sink_is_too_late(GstBaseSink *, GstMiniObject *, GstClockTime, GstClockTime, GstClockReturn, GstClockTimeDiff, gboolean) (): /GstPlayBin:playbin2/GstPlaySink:playsink/GstBin:vbin/GstBin:bin2/GstInterVideoSink:mixer100_intervideosink_350894:
0|braveWal | There may be a timestamping problem, or this computer is too slow.
0|braveWal |  WARNING: [  input900] GStreamer warning message: A lot of buffers are being dropped.

This is surprising, considering that I am on a 3.1 GHz quad-core Intel Core i7 MacBook Pro from 2017, and that I am only consuming 3 RTMP streams and producing one RTMP or TCP output sink.

I am looking for ideas about what I might be able to improve.

I have already set the x264enc preset to ultrafast, for example.

I would be glad of any pointers or help. Thank you 🌸.
