
media's Introduction

Servo Media


The servo-media crate contains the backend implementation that supports all of Servo's multimedia-related functionality.

servo-media is supposed to run properly on Linux, macOS, Windows and Android. Check the build instructions for each specific platform.

servo-media is built modularly from different crates and it provides an abstraction that allows the implementation of multiple media backends. For now, the only functional backend is GStreamer. New backend implementations are required to implement the Backend trait. This trait is the public API that servo-media exposes to clients through the ServoMedia entry point. Check the examples folder to get a sense of how to use it effectively. Alternatively, you can also check how servo-media is integrated and used in Servo.
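For orientation, here is a minimal sketch of what client code can look like. The entry-point and context-creation calls below are assumptions based on the description above and the bundled examples, and may differ between versions; the examples folder is the authoritative reference.

// Illustrative only: method names and signatures are assumptions; check
// the examples folder for the real API.
extern crate servo_media;

use servo_media::ServoMedia;

fn main() {
    // ServoMedia is the entry point that hands clients the active Backend
    // (GStreamer, for now).
    let servo_media = ServoMedia::get().expect("failed to initialize the media backend");

    // From here clients ask the backend for what they need, e.g. an audio
    // context for WebAudio processing (hypothetical options/defaults).
    let _context = servo_media.create_audio_context(Default::default());
}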

Requirements

So far, GStreamer is the only supported backend and the default one, so in order to build this crate you need to install all the gstreamer-rs dependencies for your specific platform, as listed here.

Ubuntu Trusty

Ubuntu Trusty ships very old GStreamer packages (1.2, while we need at least 1.16), so you need to either build GStreamer 1.16 or newer manually, or run the etc/ubuntu_trusty_bootstrap.sh shell script, which downloads a pre-built bundle and sets up the required environment variables:

source etc/ubuntu_trusty_bootstrap.sh

Android

For Android there are some extra requirements.

First of all, you need to install the appropriate toolchain for your target. The recommended approach is to install it through rustup. Taking arm-linux-androideabi as our example target, you need to run:

rustup target add arm-linux-androideabi

In addition to that, you also need to install the Android NDK. The recommended NDK version is r16b. The Android SDK is not mandatory but recommended for practical development.

Once you have the Android NDK installed on your machine, you need to create what the NDK itself calls a standalone toolchain.

 $ ${ANDROID_NDK}/build/tools/make-standalone-toolchain.sh \
   --platform=android-18 --toolchain=arm-linux-androideabi-4.9 \
   --install-dir=android-18-arm-toolchain --arch=arm

After that, you need to tell Cargo where to find the Android linker and ar, which live in the standalone NDK toolchain we just created. To do that, configure the arm-linux-androideabi target in .cargo/config (or in ~/.cargo/config if you want to apply the setting globally) with the linker and ar values:

[target.arm-linux-androideabi]
linker = "<path-to-your-toolchain>/android-18-toolchain/bin/arm-linux-androideabi-gcc"
ar = "<path-to-your-toolchain>/android-18-toolchain/bin/arm-linux-androideabi-ar"

This crate indirectly depends on libgstreamer_android_gen: a tool to generate the required libgstreamer_android.so library with all GStreamer dependencies for Android and some Java code required to initialize GStreamer on Android.

The final step requires fetching or generating this dependency and setting up pkg-config to use libgstreamer_android.so. There is a helper script that fetches the latest version of this dependency generated for Servo. To run the script, do:

cd etc
./android_bootstrap.sh <target>

where target can be armeabi-v7 or x86.

After running the script, you will need to add the path to the pkg-config info for all GStreamer dependencies to your PKG_CONFIG_PATH environment variable. The script will output the path and a command suggestion. For example:

export PKG_CONFIG_PATH=/Users/ferjm/dev/mozilla/media/etc/../gstreamer/armeabi-v7a/gst-build-armeabi-v7a/pkgconfig

If you want to generate your own libgstreamer_android.so bundle, check the documentation from that repo and tweak the helper script accordingly.

Build

For macOS, Windows, and Linux, simply run:

cargo build

For Android, run:

PKG_CONFIG_ALLOW_CROSS=1 cargo build --target=arm-linux-androideabi

Running the examples

Android

Make sure that you have adb installed and you have adb access to your Android device. Go to the examples/android folder and run:

source build.sh
./run.sh

media's People

Contributors

atouchet, avanthikaa, bjorn3, bors-servo, ceyusa, collares, d3vsanchez, darkspirit, eerii, eijebong, emilio, fabricedesre, ferjm, georgeroman, gterzian, jdm, khodzha, manishearth, mrobinson, mukilan, philn, purplehairengineer, rafaelcaricio, sagudev, sdroege, simonsapin, sreeise, stevesweetney, xclaesse, yvt


media's Issues

Support fan-in, fan-out, multiple connections between nodes

Currently we incorrectly upmix/downmix across ports instead of mixing multiple inputs to a single port.

We also don't support fan-in and fan-out for ports at all, and don't support multiple connections between nodes.

We need to:

  • Modify Edge so it contains a smallvec of port pairs (see the sketch after this list). Modify the graph code to support this well
  • Modify the processing code to clone chunks in case of fan-out
  • Modify the caching code to store a smallvec of chunks
  • Modify the processing code to mix these chunks together (and ultimately sum them)
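A rough sketch of the Edge change from the first item, assuming the crate already has a port identifier type (PortId below is a placeholder) and the smallvec crate:

use smallvec::SmallVec;

// Placeholder for the crate's real port identifier.
#[derive(Clone, Copy, PartialEq)]
struct PortId(usize);

/// A connection between two nodes. Instead of a single (output, input) pair,
/// an edge holds every port pair connecting the two nodes, so fan-in, fan-out
/// and multiple connections between the same nodes can all be represented.
struct Edge {
    /// (output port on the source node, input port on the destination node)
    connections: SmallVec<[(PortId, PortId); 1]>,
}

impl Edge {
    fn connect(&mut self, from: PortId, to: PortId) {
        if !self.connections.contains(&(from, to)) {
            self.connections.push((from, to));
        }
    }
}

fn main() {
    let mut edge = Edge { connections: SmallVec::new() };
    edge.connect(PortId(0), PortId(0));
    edge.connect(PortId(0), PortId(1)); // fan-out from the same output port
}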

No need to use `boxfnonce`

<nox> Manishearth, ferjm: IMO we don't need boxfnonce, I did something similar for tasks in the script crate and it helped me quite a bit to make my own trait with a name specific to my domain (i.e. tasks).
<nox> Manishearth, ferjm: I.e. https://github.com/servo/media/blob/14de1191e3ee075c1295f8cc468208a815be8eb1/audio/src/node.rs#L192 this could be a trait which convey more information about what it is, with an impl for Box<F> where F: FnOnce.
<nox> Note that for the script tasks, I ended up having two traits anyway, TaskOnce and TaskBox, because if you try to compose the boxed ones directly you end up with multiple layers of boxing.
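For reference, a minimal sketch of the pattern @nox describes: a domain-specific trait whose boxed form is callable, instead of boxfnonce. The AudioTask name is made up for illustration.

trait AudioTask: Send {
    fn run(self: Box<Self>);
}

// Any FnOnce closure can be turned into a boxed task.
impl<F: FnOnce() + Send> AudioTask for F {
    fn run(self: Box<Self>) {
        (*self)()
    }
}

fn main() {
    let task: Box<dyn AudioTask> = Box::new(|| println!("running on the render thread"));
    task.run();
}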

Panic running the audio decoder example

❯ RUST_BACKTRACE=1 cargo ex --bin audio_decoder ../resources/short.mp3
    Finished dev [unoptimized + debuginfo] target(s) in 0.13s
     Running `target/debug/audio_decoder ../resources/short.mp3`
Decoding audio
Audio decoded
thread 'AudioRenderThread' panicked at 'index 151808 out of range for slice of length 151719', libcore/slice/mod.rs:1962:5
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at libstd/panicking.rs:475
   5: std::panicking::continue_panic_fmt
             at libstd/panicking.rs:390
   6: std::panicking::try::do_call
             at libstd/panicking.rs:325
   7: core::ptr::drop_in_place
             at libcore/panicking.rs:77
   8: <core::ops::range::Range<Idx> as core::fmt::Debug>::fmt
             at libcore/slice/mod.rs:1962
   9: <alloc::collections::CollectionAllocErr as core::convert::From<core::alloc::AllocErr>>::from
             at /Users/travis/build/rust-lang/rust/src/libcore/slice/mod.rs:2127
  10: core::cmp::impls::<impl core::cmp::PartialEq for usize>::eq
             at /Users/travis/build/rust-lang/rust/src/libcore/slice/mod.rs:1944
  11: <f32 as core::ops::arith::AddAssign<&'a f32>>::add_assign
             at /Users/travis/build/rust-lang/rust/src/liballoc/vec.rs:1708
  12: servo_media_audio::buffer_source_node::AudioBufferSourceNode::handle_message
             at audio/src/buffer_source_node.rs:147
  13: servo_media_audio::graph::Edge::remove_by_pair
             at audio/src/graph.rs:342
  14: core::ptr::drop_in_place
             at ./audio/src/render_thread.rs:97
  15: core::ptr::drop_in_place
             at ./audio/src/render_thread.rs:178
  16: core::ptr::drop_in_place
             at ./audio/src/render_thread.rs:61
  17: std::sync::mpsc::sync::Blocker::BlockedReceiver
             at ./audio/src/context.rs:130
  18: core::sync::atomic::fence
             at /Users/travis/build/rust-lang/rust/src/libstd/sys_common/backtrace.rs:136
  19: alloc::alloc::dealloc
             at /Users/travis/build/rust-lang/rust/src/libstd/thread/mod.rs:409
  20: std::sync::mpsc::sync::Blocker::BlockedReceiver
             at /Users/travis/build/rust-lang/rust/src/libstd/panic.rs:313
  21: servo_media::ServoMedia::get::{{closure}}
             at /Users/travis/build/rust-lang/rust/src/libstd/panicking.rs:310
  22: panic_unwind::dwarf::eh::read_encoded_pointer
             at libpanic_unwind/lib.rs:105
  23: servo_media::ServoMedia::get::{{closure}}
             at /Users/travis/build/rust-lang/rust/src/libstd/panicking.rs:289
  24: std::sync::mpsc::sync::Blocker::BlockedReceiver
             at /Users/travis/build/rust-lang/rust/src/libstd/panic.rs:392
  25: alloc::alloc::dealloc
             at /Users/travis/build/rust-lang/rust/src/libstd/thread/mod.rs:408
  26: alloc::alloc::dealloc
             at /Users/travis/build/rust-lang/rust/src/liballoc/boxed.rs:640
  27: std::sys::unix::thread::Thread::new::thread_start
             at /Users/travis/build/rust-lang/rust/src/liballoc/boxed.rs:650
             at libstd/sys_common/thread.rs:24
             at libstd/sys/unix/thread.rs:90
  28: _pthread_body
  29: _pthread_start
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: RecvError', libcore/result.rs:945:5
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at libstd/panicking.rs:475
   5: std::panicking::continue_panic_fmt
             at libstd/panicking.rs:390
   6: std::panicking::try::do_call
             at libstd/panicking.rs:325
   7: core::ptr::drop_in_place
             at libcore/panicking.rs:77
   8: core::result::unwrap_failed
             at /Users/travis/build/rust-lang/rust/src/libcore/macros.rs:26
   9: <core::result::Result<T, E>>::unwrap
             at /Users/travis/build/rust-lang/rust/src/libcore/result.rs:782
  10: <servo_media_audio::context::AudioContext<B>>::close
             at ./audio/src/macros.rs:24
  11: audio_decoder::run_example
             at examples/audio_decoder.rs:62
  12: audio_decoder::main
             at examples/audio_decoder.rs:67
  13: std::rt::lang_start::{{closure}}
             at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
  14: std::panicking::try::do_call
             at libstd/rt.rs:59
             at libstd/panicking.rs:310
  15: panic_unwind::dwarf::eh::read_encoded_pointer
             at libpanic_unwind/lib.rs:105
  16: <std::sync::mutex::Mutex<T>>::new
             at libstd/panicking.rs:289
             at libstd/panic.rs:392
             at libstd/rt.rs:58
  17: std::rt::lang_start
             at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
  18: audio_decoder::main

New sink code isn't working

When I try this I get a single frozen videosink, and this error in the logs:

0:00:00.650705511 29662 0x7f35a8004ad0 WARN                 basesrc gstbasesrc.c:3055:gst_base_src_loop:<nicesrc3> error: Internal data stream error.
0:00:00.650721783 29662 0x7f35a8004ad0 WARN                 basesrc gstbasesrc.c:3055:gst_base_src_loop:<nicesrc3> error: streaming stopped, reason not-linked (-1)
0:00:00.907446112 29662 0x7f356048a6d0 WARN           basetransform gstbasetransform.c:1415:gst_base_transform_reconfigure:<videoconvert1> warning: not negotiated
0:00:00.907457873 29662 0x7f356048a6d0 WARN           basetransform gstbasetransform.c:1415:gst_base_transform_reconfigure:<videoconvert1> warning: not negotiated
0:00:00.956709545 29662 0x7f35bc02ee80 WARN                 basesrc gstbasesrc.c:3055:gst_base_src_loop:<nicesrc0> error: Internal data stream error.
0:00:00.956748859 29662 0x7f35bc02ee80 WARN                 basesrc gstbasesrc.c:3055:gst_base_src_loop:<nicesrc0> error: streaming stopped, reason not-negotiated (-4)

the "not linked" error is probably the relevant one, i suspect this cascades and breaks caps negotiation

the following patch which effectively reverts #186 fixes the problem. I'm not sure what's going on here, they're almost equivalent.

diff --git a/backends/gstreamer/src/webrtc.rs b/backends/gstreamer/src/webrtc.rs
index eff0af7..e5df76f 100644
--- a/backends/gstreamer/src/webrtc.rs
+++ b/backends/gstreamer/src/webrtc.rs
@@ -268,11 +268,13 @@ fn handle_media_stream(
             let q = gst::ElementFactory::make("queue", None).unwrap();
             let conv = gst::ElementFactory::make("audioconvert", None).unwrap();
             let resample = gst::ElementFactory::make("audioresample", None).unwrap();
+            let sink = gst::ElementFactory::make("autoaudiosink", None).unwrap();
 
-            pipe.add_many(&[&q, &conv, &resample])?;
-            gst::Element::link_many(&[&q, &conv, &resample])?;
+            pipe.add_many(&[&q, &conv, &resample, &sink])?;
+            gst::Element::link_many(&[&q, &conv, &resample, &sink])?;
 
             resample.sync_state_with_parent()?;
+            sink.sync_state_with_parent()?;
 
             let elements = vec![q.clone(), conv.clone(), resample];
             (q, conv, elements)
@@ -280,9 +282,11 @@ fn handle_media_stream(
         StreamType::Video => {
             let q = gst::ElementFactory::make("queue", None).unwrap();
             let conv = gst::ElementFactory::make("videoconvert", None).unwrap();
+            let sink = gst::ElementFactory::make("autovideosink", None).unwrap();
 
-            pipe.add_many(&[&q, &conv])?;
-            gst::Element::link_many(&[&q, &conv])?;
+            pipe.add_many(&[&q, &conv, &sink])?;
+            gst::Element::link_many(&[&q, &conv, &sink])?;
+            sink.sync_state_with_parent()?;
 
             let elements = vec![q.clone(), conv.clone()];
             (q, conv, elements)
diff --git a/backends/gstreamer/src/media_stream.rs b/backends/gstreamer/src/media_stream.rs
index 0623e1f..867fd55 100644
--- a/backends/gstreamer/src/media_stream.rs
+++ b/backends/gstreamer/src/media_stream.rs
@@ -156,6 +156,7 @@ impl MediaSink {
 
 impl MediaOutput for MediaSink {
     fn add_stream(&mut self, stream: Box<MediaStream>) {
+        return;
         {
             let stream = stream.as_any().downcast_ref::<GStreamerMediaStream>().unwrap();
             let last_element = stream.elements.last();

Support for <audio>

This is not a high priority right now, but we need to start thinking about this.

We need to adapt the current design to support playback for HTMLMediaElement. We can start with support for <audio>.

Some random thoughts about this:

  • We need a different backend.
    For WebAudio we have a GStreamer src element that gets and deals with raw audio data.
    For <audio> (and <video>) we probably need a different element to, for example, read from http:// URIs, which typically means raw unidentified data that could be encoded or not, could be a live stream or not, etc.
    We may use GstPlayer for this, that does most part of the required work (via playbin). This is what @philn and @sdroege used for their

  • We need to support having <audio> elements as sources of WebAudio pipelines as well (check MediaElementAudioSourceNode), which makes things a bit harder. I think we can instruct GstPlayer to output the processed stream into a custom audio sink from where we can inject the resulting buffer into the WebAudio pipeline.

  • We need to differentiate between real-time and non-real-time audio rendering and figure out in general how (or even if we want/need) to reuse the current WebAudio graph as a wider media graph to manage the different streams coming from different media APIs.

Implement method of identifying AudioParams

Just like we have PortIds, we should have ParamIds, and for a given kind of node, each ParamId corresponds to a specific parameter.

This will let us refer to them from the DOM side and correctly implement nodes-connected-to-params

(I'll implement this soon)
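A sketch of the shape this could take, mirroring how PortId identifies ports; all names below are illustrative:

// Placeholder for the crate's real node identifier.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
struct NodeId(usize);

/// Which parameter of a node kind we are talking about (one variant per
/// parameter that kind of node exposes).
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
enum ParamType {
    Frequency,
    Detune,
    Gain,
    PlaybackRate,
}

/// A (node, parameter) pair, analogous to a (node, port) pair, so the DOM side
/// can address "the frequency param of this oscillator" in messages and when
/// connecting nodes to params.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
struct ParamId(NodeId, ParamType);

fn main() {
    let osc_frequency = ParamId(NodeId(3), ParamType::Frequency);
    println!("{:?}", osc_frequency);
}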

The switch to AppSrc broke Android example

We switched from BaseSrc to AppSrc in #4 and this broke the Android example. We get sound on desktop but no sound at all on Android. I see this in the logcat:

05-17 11:23:57.494 7738 8344 W GStreamer+basesrc: 0:00:08.982054176 0xd4ae4af0 gstbasesrc.c:3055:gst_base_src_loop: error: Internal data stream error.
05-17 11:23:57.494 7738 8344 W GStreamer+basesrc: 0:00:08.982140165 0xd4ae4af0 gstbasesrc.c:3055:gst_base_src_loop: error: streaming stopped, reason not-negotiated (-4)
05-17 11:23:57.494 7738 8344 W GStreamer+basetransform: 0:00:08.982345530 0xd4ae4af0 gstbasetransform.c:1355:gst_base_transform_setcaps: transform could not transform audio/x-raw, format=(string)F32LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1 in anything we support
05-17 11:23:57.495 7738 8344 W GStreamer+basetransform: 0:00:08.982536884 0xd4ae4af0 gstbasetransform.c:1355:gst_base_transform_setcaps: transform could not transform audio/x-raw, format=(string)F32LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1 in anything we support
05-17 11:23:57.495 7738 8344 W GStreamer+basetransform: 0:00:08.982646415 0xd4ae4af0 gstbasetransform.c:1355:gst_base_transform_setcaps: transform could not transform audio/x-raw, format=(string)F32LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1 in anything we support
05-17 11:23:57.495 7738 8344 W GStreamer+audiobasesink: 0:00:08.982688968 0xd4ae4af0 gstaudiobasesink.c:1119:gst_audio_base_sink_wait_event: error: Sink not negotiated before eos event.

Set up TravisCI

It should have a configuration that builds for Android and also builds the dummy backend.

Thread safety and driving model

So currently we're not actually thread safe. There's an unsafe impl of Send/Sync on AudioGraphThread, which shouldn't exist.

We simultaneously need:

  • mutable access from the graph thread for things like adding nodes and play/pause
  • mutable access from gstreamer for running the graph.

This isn't good. We could solve it with locks, but that introduces two competing synchronization mechanisms. Instead I think we should have everything run by the graph thread (IIRC this was the original plan but we didn't really explore it), with gstreamer being isolated in a different thread requesting data.

The plan would be:

  • The graph thread is the only thing that touches the graph
  • when the sink needs data, it requests it from the graph thread, and waits
  • the graph thread processes incoming graph change messages, as well as incoming need_data messages. Whenever the source needs data, it sends it over.

We don't actually need a separate "gstreamer thread" setup, the graph thread can own the gstreamer sink and the need_data callbacks let this work asynchronously.

I'm a bit afraid that the render quantum of ~3ms may be affected by this model, though.
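A rough sketch of this driving model using plain std channels; every type and message name below is a placeholder, not the real code:

use std::sync::mpsc::{channel, Sender};
use std::thread;

struct Graph; // placeholder: the real audio graph
struct Chunk; // placeholder: one render quantum of audio

impl Graph {
    fn add_node(&mut self) {}
    fn process(&mut self) -> Chunk {
        Chunk
    }
}

enum GraphMessage {
    // Graph mutations coming from the control side (add nodes, play/pause, ...).
    AddNode,
    // The GStreamer source ran out of data; reply with the next chunk.
    NeedData(Sender<Chunk>),
}

fn spawn_graph_thread() -> Sender<GraphMessage> {
    let (tx, rx) = channel();
    thread::spawn(move || {
        // The graph thread is the only thing that ever touches the graph, so no
        // unsafe Send/Sync impls and no competing locks are needed.
        let mut graph = Graph;
        for msg in rx {
            match msg {
                GraphMessage::AddNode => graph.add_node(),
                GraphMessage::NeedData(reply) => {
                    let _ = reply.send(graph.process());
                }
            }
        }
    });
    tx
}

fn main() {
    let graph = spawn_graph_thread();
    graph.send(GraphMessage::AddNode).unwrap();

    // The need_data callback registered on the sink would do the equivalent of:
    let (reply_tx, reply_rx) = channel();
    graph.send(GraphMessage::NeedData(reply_tx)).unwrap();
    let _chunk = reply_rx.recv().unwrap();
}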

Race condition in GStreamer decoder

The audio_decoder example occasionally panics on this assertion. This happens a lot on CI (which has "1.5 threads" IIRC) but less so locally.

I investigated it a bit (it's pretty easy to reproduce under rr's chaos mode, rr record -h target/debug/audio_decoder).

The problem is that gstreamer decodes things on multiple threads (per-channel as far as I can tell), which both call new_sample(). When it reaches the end, eos() gets called. However, the new_sample() from the other thread may not have run yet -- eos() isn't guaranteed to run after all of the new_sample() calls finish.

One possible fix is to have a second RwLock that is always read-acquired by new_sample(); eos() write-acquires it and first drains appsink.pull_sample() into the buffers before sending the actual EOS signal. This is kind of ugly; I'm hoping there's a way we can ensure eos gets called after everything else, or a way to wait in eos for everything to finish.
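A sketch of that ordering fix with the GStreamer specifics abstracted away; pull_remaining_samples and the shared state below are stand-ins for the real appsink calls and decoder state:

use std::sync::{Arc, RwLock};

struct DecoderState {
    buffers: Vec<Vec<f32>>,
}

// Guards the window in which decode callbacks may still be running.
type InFlight = Arc<RwLock<()>>;

fn new_sample(in_flight: &InFlight, state: &Arc<RwLock<DecoderState>>, sample: Vec<f32>) {
    // Every new_sample() holds the read lock for its whole duration.
    let _running = in_flight.read().unwrap();
    state.write().unwrap().buffers.push(sample);
}

fn eos(in_flight: &InFlight, state: &Arc<RwLock<DecoderState>>) {
    // Write-acquiring blocks until every in-flight new_sample() has finished...
    let _exclusive = in_flight.write().unwrap();
    // ...then drain whatever the sink still has queued before signalling EOS.
    for sample in pull_remaining_samples() {
        state.write().unwrap().buffers.push(sample);
    }
    // Finally, send the actual EOS signal to whoever is waiting on the decode.
}

// Placeholder for looping over appsink.pull_sample() until it runs dry.
fn pull_remaining_samples() -> Vec<Vec<f32>> {
    Vec::new()
}

fn main() {
    let in_flight: InFlight = Arc::new(RwLock::new(()));
    let state = Arc::new(RwLock::new(DecoderState { buffers: Vec::new() }));
    new_sample(&in_flight, &state, vec![0.0; 128]);
    eos(&in_flight, &state);
}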

cc @ferjm @sdroege

AudioScheduledSourceNode handling is buggy and `.start(offset)` doesn't work right

Currently, node.start(t) doesn't seem to work for scheduled source nodes. The following patch fixes it, but overall I'm not sure we're doing the right thing with the two-return-value should_play_at function.

cc @ferjm could you look into this and figure out if this is the right fix? I'm using this fix so I can move ahead with my WPT investigation, but we should probably improve this code overall

diff --git a/audio/src/buffer_source_node.rs b/audio/src/buffer_source_node.rs
index 7847581..68d6ca2 100644
--- a/audio/src/buffer_source_node.rs
+++ b/audio/src/buffer_source_node.rs
@@ -118,10 +118,14 @@ impl AudioNodeEngine for AudioBufferSourceNode {
 
         let len = { self.buffer.as_ref().unwrap().len() as usize };
 
-        if self.playback_offset >= len || self.should_play_at(info.frame) == (false, true) {
+        let should = self.should_play_at(info.frame);
+        if self.playback_offset >= len || should == (false, true) {
             self.maybe_trigger_onended_callback();
             inputs.blocks.push(Default::default());
             return inputs;
+        } else if !should.0 {
+            inputs.blocks.push(Default::default());
+            return inputs;
         }

Report missing decoders and other audio decoder errors

We should be listening for pipeline bus error messages and firing the audio decoder error callback when needed.
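A hedged sketch of what listening on the bus could look like. Names follow recent gstreamer-rs (older releases use get_-prefixed getters), and the pipeline string and callback plumbing are illustrative, not the decoder's actual code.

extern crate gstreamer as gst;

use gst::prelude::*;

fn decode(uri: &str, on_error: impl Fn(String)) {
    gst::init().unwrap();
    let pipeline = gst::parse_launch(&format!("playbin uri={}", uri)).unwrap();
    let _ = pipeline.set_state(gst::State::Playing);

    let bus = pipeline.bus().unwrap();
    for msg in bus.iter_timed(gst::ClockTime::NONE) {
        match msg.view() {
            gst::MessageView::Eos(..) => break,
            gst::MessageView::Error(err) => {
                // This is where the decoder's error callback should fire,
                // e.g. for a missing ogg/vorbis decoder.
                on_error(format!("{} ({:?})", err.error(), err.debug()));
                break;
            }
            _ => {}
        }
    }
    let _ = pipeline.set_state(gst::State::Null);
}

fn main() {
    decode("file:///tmp/short.mp3", |e| eprintln!("decode error: {}", e));
}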

philn> ferjm: there you go :)
12:27:54 travis needs ogg/vorbis libs
12:28:21 <ferjm> Missing decoder: Ogg (audio/ogg)
12:28:33 cool
12:30:18 we shouldn't be freezing there though :\
12:33:03 philn: I guess decodebin is throwing errors for missing decoders, right?
12:36:35 <philn> yeah
12:38:58 you should be able to reproduce the error locally by moving out libgstogg.so
12:39:16 <@Manishearth> philn: thanks :)
12:39:36 <ferjm> philn: ok, cool. Are these errors sent through as a pipeline bus message or is there an error callback I can handle in decodebin?
12:39:59 <@Manishearth> ferjm: in the example we wait for it to respond, we should .unwrap() the .recv() call
12:40:02 that's why it hangs
12:40:03 <philn> if you use gst-player you can connect to the error signal iirc
12:40:08 <@Manishearth> we get stuck on .recv()
12:40:26 <philn> otherwise yes, an error message is sent over the bus
12:41:10 <@Manishearth> philn: its a bug in the test's error handling
12:41:22 <philn> ah, ok!
12:42:15 <ferjm> philn: ok, we are not using gst-player in this case
12:42:43 Manishearth:  https://github.com/servo/media/blob/master/examples/audio_decoder.rs#L50
12:43:17 <@Manishearth> ferjm: oh hm
12:43:18 <ferjm> we should be firing the error callback for the missing decoder and send a message through the channel there https://github.com/servo/media/blob/master/examples/audio_decoder.rs#L34
12:43:23 <@Manishearth> yeah
12:43:34 it's an error handler so it doesn't panic, my bad
12:43:34 <ferjm> but we need to get the error from the pipeline bus first 

Unable to build project

Hello,

I'm getting the following error when building servo media:

error: aborting due to previous error

error: Could not compile examples.

Caused by:
process didn't exit successfully: rustc --crate-name offline_context examples/offline_context.rs --color always --crate-type bin --emit=dep-info,link -C debuginfo=2 -C metadata=aed0fb0205c5093d -C extra-filename=-aed0fb0205c5093d --out-dir /home/zimio/workspace/mozilla/media/target/debug/deps -C incremental=/home/zimio/workspace/mozilla/media/target/debug/incremental -L dependency=/home/zimio/workspace/mozilla/media/target/debug/deps --extern env_logger=/home/zimio/workspace/mozilla/media/target/debug/deps/libenv_logger-6ae0db42cd5a81be.rlib --extern euclid=/home/zimio/workspace/mozilla/media/target/debug/deps/libeuclid-240f7d3a788cc1ac.rlib --extern gleam=/home/zimio/workspace/mozilla/media/target/debug/deps/libgleam-a418cd17a3b482fc.rlib --extern glutin=/home/zimio/workspace/mozilla/media/target/debug/deps/libglutin-db916739f4394186.rlib --extern ipc_channel=/home/zimio/workspace/mozilla/media/target/debug/deps/libipc_channel-b969ce4051c98805.rlib --extern rand=/home/zimio/workspace/mozilla/media/target/debug/deps/librand-9f582797a01a5572.rlib --extern servo_media=/home/zimio/workspace/mozilla/media/target/debug/deps/libservo_media-b045ad578ba4e888.rlib --extern time=/home/zimio/workspace/mozilla/media/target/debug/deps/libtime-db1609a5ae642d87.rlib --extern webrender=/home/zimio/workspace/mozilla/media/target/debug/deps/libwebrender-7ceb9a0165526e89.rlib --extern winit=/home/zimio/workspace/mozilla/media/target/debug/deps/libwinit-2f6826a86cd892ac.rlib -L native=/home/zimio/workspace/mozilla/media/target/debug/build/libloading-b7b9b3538b7e1fe0/out -L native=/usr/local/lib -L native=/usr/local/lib -L native=/usr/local/lib -L native=/usr/local/lib -L native=/usr/local/lib -L native=/usr/local/lib -L native=/usr/lib/x86_64-linux-gnu (exit code: 1)

It seems there is code in the examples that it cannot compile.

Here is the output of rustup show:

(base) ➜ media git:(master) ✗ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains

stable-x86_64-unknown-linux-gnu
nightly-2018-10-05-x86_64-unknown-linux-gnu (default)

active toolchain

nightly-2018-10-05-x86_64-unknown-linux-gnu (default)
rustc 1.31.0-nightly (8c4ad4e9e 2018-10-04)

So I'm using nightly correctly, yet it seems that I can't build this.

Full graph support

Currently we just have a vector of nodes. We need the ability to have a full graph, using the cycle-breaking algorithm from the spec.

We should probably use petgraph.
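A minimal sketch of what that could look like with petgraph; the node payloads are placeholders and the cycle-breaking step is only hinted at:

extern crate petgraph;

use petgraph::algo::toposort;
use petgraph::stable_graph::StableGraph;

#[derive(Debug)]
enum NodeKind {
    BufferSource,
    Gain,
    Destination,
}

fn main() {
    // StableGraph keeps node indices valid across removals, which matters when
    // the DOM can disconnect nodes at any time.
    let mut graph: StableGraph<NodeKind, ()> = StableGraph::new();
    let source = graph.add_node(NodeKind::BufferSource);
    let gain = graph.add_node(NodeKind::Gain);
    let dest = graph.add_node(NodeKind::Destination);
    graph.add_edge(source, gain, ());
    graph.add_edge(gain, dest, ());

    // Processing order; a real implementation would first break cycles
    // (inserting the spec-mandated delay) so this cannot fail.
    match toposort(&graph, None) {
        Ok(order) => println!("process order: {:?}", order),
        Err(_) => println!("cycle detected: needs the spec's cycle-breaking step"),
    }
}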

merge rust-playground into media crate

I would like to hear your opinion on merging rust-playground into this crate.

The general idea is to simplify the merge into Servo, provide a similar API for audio and video, and share common resources (I'm thinking, for example, of the player objects or playbin).

Channel support

With #17 we have a chunk and block abstraction, but the block abstraction only supports a single channel.

We should expand this to allow for more channels (stored in the same buffer, probably), as well as upmixing/downmixing.
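A sketch of one possible shape: N channels stored planar in a single buffer, with a naive up/down-mix. The layout, the block size constant and the mixing rules below are open questions, not a settled design:

const FRAMES_PER_BLOCK: usize = 128;

struct Block {
    channels: u8,
    // Channel 0's frames, then channel 1's frames, ... (planar, one buffer).
    buffer: Vec<f32>,
}

impl Block {
    fn new(channels: u8) -> Self {
        Block {
            channels,
            buffer: vec![0.; FRAMES_PER_BLOCK * channels as usize],
        }
    }

    fn channel(&self, i: u8) -> &[f32] {
        let start = i as usize * FRAMES_PER_BLOCK;
        &self.buffer[start..start + FRAMES_PER_BLOCK]
    }

    /// Naive mix to a target channel count: existing channels are copied when
    /// upmixing (extra channels get the average), everything is averaged when
    /// downmixing.
    fn mix(&self, target: u8) -> Block {
        let mut out = Block::new(target);
        for frame in 0..FRAMES_PER_BLOCK {
            let sum: f32 = (0..self.channels).map(|c| self.channel(c)[frame]).sum();
            let mono = sum / self.channels as f32;
            for c in 0..target {
                let idx = c as usize * FRAMES_PER_BLOCK + frame;
                out.buffer[idx] = if target >= self.channels && c < self.channels {
                    self.channel(c)[frame]
                } else {
                    mono
                };
            }
        }
        out
    }
}

fn main() {
    let mono = Block::new(1);
    let stereo = mono.mix(2);
    assert_eq!(stereo.channels, 2);
}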

Update Android example

We made several changes to the API and to the repo layout with the multi-crate split-up, and they broke the Android example.

AudioParam support

Most parameters can be tweaked dynamically.

They can be updated either once per block or once per frame (k-rate vs. a-rate).

We need some abstractions for both kinds of params, and for each kind of way they can vary.

We'll also need some way of identifying each parameter whilst messaging the thread. (Kind of needs #19 to exist first, in some rudimentary form)
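A sketch of one way the two update rates could be abstracted; names and structure are illustrative only:

const FRAMES_PER_BLOCK: usize = 128;

#[derive(Clone, Copy)]
enum ParamRate {
    /// k-rate: sampled once per render block.
    Block,
    /// a-rate: sampled once per frame.
    Frame,
}

struct Param {
    rate: ParamRate,
    value: f32,
    // Scheduled automation events (setValueAtTime, ramps, ...) would live here.
}

impl Param {
    /// Value to use for frame `frame` within the current block.
    fn value_at(&self, frame: usize) -> f32 {
        match self.rate {
            // k-rate params ignore the intra-block position.
            ParamRate::Block => self.value,
            // a-rate params would evaluate their automation curve per frame;
            // with no curve here, it's just the same value.
            ParamRate::Frame => self.evaluate(frame),
        }
    }

    fn evaluate(&self, _frame: usize) -> f32 {
        self.value
    }
}

fn main() {
    let gain = Param { rate: ParamRate::Block, value: 0.5 };
    for frame in 0..FRAMES_PER_BLOCK {
        let _g = gain.value_at(frame);
    }
}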

Implement parser for Media Fragment URIs
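As a starting point, a minimal sketch of parsing the temporal (t=) dimension of a Media Fragments URI; a real parser would also need the npt:/SMPTE/clock formats and the other dimensions (xywh, track, id):

/// Parses "t=10", "t=10,20" or "t=npt:5" out of a URI fragment, returning
/// (start, optional end) in seconds. An omitted start defaults to 0.
fn parse_temporal_fragment(fragment: &str) -> Option<(f64, Option<f64>)> {
    for pair in fragment.split('&') {
        if let Some(value) = pair.strip_prefix("t=") {
            let value = value.strip_prefix("npt:").unwrap_or(value);
            let mut parts = value.splitn(2, ',');
            let start = parts.next()?.parse().unwrap_or(0.0);
            let end = parts.next().and_then(|e| e.parse().ok());
            return Some((start, end));
        }
    }
    None
}

fn main() {
    assert_eq!(parse_temporal_fragment("t=10,20"), Some((10.0, Some(20.0))));
    assert_eq!(parse_temporal_fragment("t=npt:5"), Some((5.0, None)));
}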

Turn on GStreamer for non-x86

We disabled GStreamer for non-x86 in #96 so we could land the WebAudio API in Servo. We should turn it on again as soon as possible.

Extend Player error API

Currently the PlayerError enum in gstreamer-rs is just a stub. We could extend it with the errors that we care about based on our use cases.

This issue will have one part to update gstreamer-rs and another part to make servo-media use the new set of errors.
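Purely as an illustration, one possible shape for a richer error enum on the servo-media side, with variants based on the error cases discussed in this repo; the real set would be driven by what GStreamer actually reports:

#[derive(Debug)]
enum PlayerError {
    /// No decoder available for the media type (e.g. a missing ogg/vorbis plugin).
    MissingDecoder(String),
    /// The source could not be read (network error, file not found, ...).
    SourceUnavailable(String),
    /// Caps negotiation or pipeline linking failed.
    PipelineFailed(String),
    /// Anything we don't map yet, carrying the backend's message verbatim.
    Backend(String),
}

fn main() {
    let err = PlayerError::MissingDecoder("audio/ogg".to_string());
    println!("playback failed: {:?}", err);
}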

Intermittent audio decoder test panic

thread 'main' panicked at 'assertion failed: buf.len() == buffers[0].len()', audio/src/buffer_source_node.rs:206:13
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: std::panicking::rust_panic_with_hook
             at libstd/panicking.rs:477
   5: std::panicking::begin_panic
             at libstd/panicking.rs:411
   6: servo_media_audio::buffer_source_node::AudioBuffer::from_buffers
             at audio/src/buffer_source_node.rs:206
   7: <servo_media_audio::buffer_source_node::AudioBuffer as core::convert::From<alloc::vec::Vec<alloc::vec::Vec<f32>>>>::from
             at audio/src/buffer_source_node.rs:233
   8: <T as core::convert::Into<U>>::into
             at libcore/convert.rs:456
   9: audio_decoder::run_example
             at examples/audio_decoder.rs:67
  10: audio_decoder::main
             at examples/audio_decoder.rs:77
  11: std::rt::lang_start::{{closure}}
             at libstd/rt.rs:74
  12: std::panicking::try::do_call
             at libstd/rt.rs:59
             at libstd/panicking.rs:310
  13: __rust_maybe_catch_panic
             at libpanic_unwind/lib.rs:102
  14: std::rt::lang_start_internal
             at libstd/panicking.rs:289
             at libstd/panic.rs:392
             at libstd/rt.rs:58
  15: std::rt::lang_start
             at libstd/rt.rs:74
  16: main
  17: __libc_start_main
  18: <unknown>

Support constraints for media capture

This code creates a video or audio stream with default constraints. The constraint API should be lifted out of the GStreamer backend into the high-level library, so that consumers of this crate can request an input stream that matches the given constraints. This will require adding additional arguments to the create_videoinput_stream and create_audioinput_stream functions and passing them down the stack until they reach the create_input_stream method in the GStreamer backend.
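A hedged sketch of what lifting the constraint types into the high-level crate could look like; the struct fields, trait and signatures below are assumptions, not the existing API:

/// Backend-agnostic constraints requested by the caller (fields are illustrative).
#[derive(Clone, Debug, Default)]
pub struct MediaTrackConstraintSet {
    pub width: Option<u32>,
    pub height: Option<u32>,
    pub frame_rate: Option<f64>,
    pub sample_rate: Option<u32>,
}

pub struct MediaStreamId(pub usize); // placeholder for the real stream handle

/// Hypothetical capture portion of the public API: both calls take constraints
/// and hand them down to the backend's create_input_stream equivalent, which
/// would translate them into caps / device filters.
pub trait CaptureBackend {
    fn create_audioinput_stream(&self, constraints: MediaTrackConstraintSet) -> Option<MediaStreamId>;
    fn create_videoinput_stream(&self, constraints: MediaTrackConstraintSet) -> Option<MediaStreamId>;
}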
