
webm-muxer's Introduction

webm-muxer - JavaScript WebM multiplexer

The WebCodecs API provides low-level access to media codecs, but offers no way of actually packaging (multiplexing) the encoded media into a playable file. This project implements a WebM/Matroska multiplexer in pure TypeScript that is high-quality, fast and tiny, and supports video, audio and subtitles as well as live-streaming.

Demo: Muxing into a file

Demo: Streaming

Note: If you're looking to create MP4 files, check out mp4-muxer, the sister library to webm-muxer.

Quick start

The following is an example of a common usage of this library:

import { Muxer, ArrayBufferTarget } from 'webm-muxer';

let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
        codec: 'V_VP9',
        width: 1280,
        height: 720
    }
});

let videoEncoder = new VideoEncoder({
    output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
    error: e => console.error(e)
});
videoEncoder.configure({
    codec: 'vp09.00.10.08',
    width: 1280,
    height: 720,
    bitrate: 1e6
});

/* Encode some frames... */

await videoEncoder.flush();
muxer.finalize();

let { buffer } = muxer.target; // Buffer contains final WebM file

Motivation

This library was created to power the in-game video renderer of the browser game Marble Blast Web - here you can find a video completely rendered by it and muxed with this library. Previous efforts at in-browser WebM muxing, such as webm-writer-js or webm-muxer.js, were either lacking in functionality or way too heavy in byte size, which prompted the creation of this library.

Installation

Using NPM, simply install this package using

npm install webm-muxer

You can import all exported classes like so:

import * as WebMMuxer from 'webm-muxer';
// Or, using CommonJS:
const WebMMuxer = require('webm-muxer');

Alternatively, you can simply include the library as a script in your HTML, which will add a WebMMuxer object, containing all the exported classes, to the global object, like so:

<script src="build/webm-muxer.js"></script>

Usage

Initialization

For each WebM file you wish to create, create an instance of Muxer like so:

import { Muxer } from 'webm-muxer';

let muxer = new Muxer(options);

The available options are defined by the following interface:

interface MuxerOptions {
    target:
        | ArrayBufferTarget
        | StreamTarget
        | FileSystemWritableFileStreamTarget,

    video?: {
        codec: string,
        width: number,
        height: number,
        frameRate?: number, // Optional, adds metadata to the file
        alpha?: boolean // If the video contains transparency data
    },

    audio?: {
        codec: string,
        numberOfChannels: number,
        sampleRate: number,
        bitDepth?: number // Mainly necessary for PCM-coded audio
    },

    subtitles?: {
        codec: string
    },

    streaming?: boolean,

    type?: 'webm' | 'matroska',

    firstTimestampBehavior?: 'strict' | 'offset' | 'permissive'
}

Codecs officially supported by WebM are:
Video: V_VP8, V_VP9, V_AV1
Audio: A_OPUS, A_VORBIS
Subtitles: S_TEXT/WEBVTT
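
For example, here is a minimal sketch of a muxer set up with both a video and an audio track, using codec IDs from the list above (the dimensions and audio parameters are placeholders):

import { Muxer, ArrayBufferTarget } from 'webm-muxer';

let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
        codec: 'V_VP9',
        width: 1280,
        height: 720,
        frameRate: 30 // Optional; adds metadata to the file
    },
    audio: {
        codec: 'A_OPUS',
        numberOfChannels: 2,
        sampleRate: 48000
    }
});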

target

This option specifies where the data created by the muxer will be written. The options are:

  • ArrayBufferTarget: The file data will be written into a single large buffer, which is then stored in the target.

    import { Muxer, ArrayBufferTarget } from 'webm-muxer';
    
    let muxer = new Muxer({
        target: new ArrayBufferTarget(),
        // ...
    });
    
    // ...
    
    muxer.finalize();
    let { buffer } = muxer.target;
  • StreamTarget: This target defines callbacks that will get called whenever there is new data available - this is useful if you want to stream the data, e.g. pipe it somewhere else. The constructor has the following signature:

    constructor(options: {
        onData?: (data: Uint8Array, position: number) => void,
        onHeader?: (data: Uint8Array, position: number) => void,
        onCluster?: (data: Uint8Array, position: number, timestamp: number) => void,
        chunked?: boolean,
        chunkSize?: number
    });

    onData is called for each new chunk of available data. The position argument specifies the offset in bytes at which the data has to be written. Since the data written by the muxer is not entirely sequential, make sure to respect this argument.

    When using chunked: true, data created by the muxer will first be accumulated and only written out once it has reached sufficient size. This is useful for reducing the total number of writes, at the cost of latency. It uses a default chunk size of 16 MiB, which can be overridden by manually setting chunkSize to the desired byte length.

    If you want to use this target for live-streaming, make sure to also set streaming: true in the muxer options. This will ensure that data is written monotonically (sequentially) and already-written data is never "patched" - necessary for live-streaming, but not recommended for muxing files for later viewing.

    The onHeader and onCluster callbacks will be called for the file header and each Matroska cluster, respectively. This way, you don't need to parse them out yourself from the data provided by onData.

    import { Muxer, StreamTarget } from 'webm-muxer';
    
    let muxer = new Muxer({
        target: new StreamTarget({
            onData: (data, position) => { /* Do something with the data */ }
        }),
        // ...
    });
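
    For live-streaming, a sketch might look like this (sendChunkToServer is a hypothetical function standing in for your transport):

    import { Muxer, StreamTarget } from 'webm-muxer';
    
    let muxer = new Muxer({
        target: new StreamTarget({
            onHeader: (data, position) => sendChunkToServer(data), // Hypothetical transport
            onCluster: (data, position, timestamp) => sendChunkToServer(data)
        }),
        streaming: true, // Ensures data is written monotonically
        // ...
    });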
  • FileSystemWritableFileStreamTarget: This is essentially a wrapper around a chunked StreamTarget with the intention of simplifying the use of this library with the File System Access API. Writing the file directly to disk as it's being created comes with many benefits, such as creating files way larger than the available RAM.

    You can optionally override the default chunkSize of 16 MiB.

    constructor(
        stream: FileSystemWritableFileStream,
        options?: { chunkSize?: number }
    );

    Usage example:

    import { Muxer, FileSystemWritableFileStreamTarget } from 'webm-muxer';
    
    let fileHandle = await window.showSaveFilePicker({
        suggestedName: `video.webm`,
        types: [{
            description: 'Video File',
            accept: { 'video/webm': ['.webm'] }
        }],
    });
    let fileStream = await fileHandle.createWritable();
    let muxer = new Muxer({
        target: new FileSystemWritableFileStreamTarget(fileStream),
        // ...
    });
    
    // ...
    
    muxer.finalize();
    await fileStream.close(); // Make sure to close the stream

streaming (optional)

Configures the muxer to only write data monotonically, which is useful for live-streaming the WebM as it's being muxed; intended to be used together with a StreamTarget. When enabled, some features such as storing duration and seeking will be disabled or impacted, so don't use this option when you want to write out a WebM file for later viewing.

type (optional)

As WebM is a subset of the more general Matroska multimedia container format, this library can mux both WebM and Matroska files. WebM, according to the official specification, supports only a small subset of the codecs supported by Matroska. It is likely, however, that most players will successfully play back a WebM file with codecs other than the ones sanctioned by the spec. To be on the safe side, you can set the type option to 'matroska', which will internally label the file as a general Matroska file. If you do this, your output file should also have the .mkv extension.
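
For example, a sketch of muxing an H.264 track into a Matroska file might look like this (V_MPEG4/ISO/AVC is the Matroska codec ID for H.264, which is not part of the WebM subset):

import { Muxer, ArrayBufferTarget } from 'webm-muxer';

let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
        codec: 'V_MPEG4/ISO/AVC', // Not WebM-sanctioned, so label the file as Matroska
        width: 1920,
        height: 1080
    },
    type: 'matroska' // Save the output with an .mkv extension
});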

firstTimestampBehavior (optional)

Specifies how to deal with the first chunk in each track having a non-zero timestamp. In the default strict mode, timestamps must start with 0 to ensure proper playback. However, when directly pumping video frames or audio data from a MediaStreamTrack into the encoder and then into the muxer, the timestamps are usually relative to the age of the document or the computer's clock, which is typically not what we want. Handling of these timestamps must be set explicitly:

  • Use 'offset' to offset the timestamps of each track by that track's first chunk's timestamp. This way, they start at 0 (see the sketch after this list).
  • Use 'permissive' to allow the first timestamp to be non-zero.
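
For example, a sketch for muxing frames coming from a MediaStreamTrack, whose timestamps typically don't start at zero:

import { Muxer, ArrayBufferTarget } from 'webm-muxer';

let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
        codec: 'V_VP9',
        width: 1280,
        height: 720
    },
    firstTimestampBehavior: 'offset' // Shift each track so its first chunk starts at 0
});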

Muxing media chunks

Then, with VideoEncoder and AudioEncoder set up, send encoded chunks to the muxer using the following methods:

addVideoChunk(
    chunk: EncodedVideoChunk,
    meta?: EncodedVideoChunkMetadata,
    timestamp?: number
): void;

addAudioChunk(
    chunk: EncodedAudioChunk,
    meta?: EncodedAudioChunkMetadata,
    timestamp?: number
): void;

Both methods accept an optional third argument, timestamp (in microseconds), which, if specified, overrides the timestamp property of the passed-in chunk.

The metadata comes from the second parameter of the output callback given to the VideoEncoder or AudioEncoder's constructor and needs to be passed into the muxer, like so:

let videoEncoder = new VideoEncoder({
    output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
    error: e => console.error(e)
});
videoEncoder.configure(/* ... */);

Should you have obtained your encoded media data from a source other than the WebCodecs API, you can use the following methods to send your raw data directly to the muxer:

addVideoChunkRaw(
    data: Uint8Array,
    type: 'key' | 'delta',
    timestamp: number, // In microseconds
    meta?: EncodedVideoChunkMetadata
): void;

addAudioChunkRaw(
    data: Uint8Array,
    type: 'key' | 'delta',
    timestamp: number, // In microseconds
    meta?: EncodedAudioChunkMetadata
): void;
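
For example, a sketch of feeding externally-encoded video data to the muxer (frameData, isKeyFrame and timestampInMicroseconds are hypothetical variables from your own pipeline):

muxer.addVideoChunkRaw(
    frameData, // Uint8Array containing one encoded frame
    isKeyFrame ? 'key' : 'delta',
    timestampInMicroseconds
);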

Finishing up

When encoding is finished and all the encoders have been flushed, call finalize on the Muxer instance to finalize the WebM file:

muxer.finalize();

When using an ArrayBufferTarget, the final buffer will be accessible through it:

let { buffer } = muxer.target;

When using a FileSystemWritableFileStreamTarget, make sure to close the stream after calling finalize:

await fileStream.close();

Details

Video key frame frequency

Canonical WebM files can only have a maximum Matroska Cluster length of 32.768 seconds, and each cluster must begin with a video key frame. You therefore need to tell your VideoEncoder to encode a VideoFrame as a key frame at least every 32 seconds; otherwise, your WebM file will be incorrect. You can do this like so:

videoEncoder.encode(frame, { keyFrame: true });
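
If your frames don't arrive at a fixed rate, a sketch that forces a key frame based on elapsed time rather than a fixed frame count might look like this (assuming the videoEncoder from the quick start and frame timestamps in microseconds):

let lastKeyFrame = -Infinity;

function encodeFrame(frame) {
    // Force a new key frame at least every 10 seconds, comfortably below the 32-second limit
    let needsKeyFrame = frame.timestamp - lastKeyFrame >= 10e6;
    if (needsKeyFrame) lastKeyFrame = frame.timestamp;

    videoEncoder.encode(frame, { keyFrame: needsKeyFrame });
}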

Media chunk buffering

When muxing a file with both a video and an audio track, it is important that the individual chunks inside the WebM file be stored in order of monotonically increasing timestamp. This means, however, that the multiplexer must buffer chunks of one medium if the other medium has not yet supplied chunks up to that timestamp. For example, should you first encode all your video frames and only then encode the audio, the multiplexer would have to hold all those video frames in memory until the audio chunks start coming in. This might lead to memory exhaustion should your video be very long. When there is only one media track, this issue does not arise. So, when muxing a multimedia file, make sure it is somewhat limited in size or that the chunks are encoded in a somewhat interleaved way (as is the case for live media).
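
Here is a sketch of such interleaved encoding; getNextVideoFrame and getNextAudioData are hypothetical helpers that return the next VideoFrame or AudioData in timestamp order, or null when their medium is exhausted:

let nextVideo = getNextVideoFrame(); // Hypothetical helpers
let nextAudio = getNextAudioData();

while (nextVideo || nextAudio) {
    // Always encode whichever medium is further behind, so the muxer
    // never has to buffer one track while waiting for the other
    if (!nextAudio || (nextVideo && nextVideo.timestamp <= nextAudio.timestamp)) {
        videoEncoder.encode(nextVideo);
        nextVideo.close();
        nextVideo = getNextVideoFrame();
    } else {
        audioEncoder.encode(nextAudio);
        nextAudio.close();
        nextAudio = getNextAudioData();
    }
}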

Subtitles

This library supports adding a subtitle track to a file. Like video and audio, subtitles also need to be encoded before they can be added to the muxer. To do this, this library exports its own SubtitleEncoder class with a WebCodecs-like API. Currently, it only supports encoding WebVTT files.

Here's a full example using subtitles:

import { Muxer, SubtitleEncoder, ArrayBufferTarget } from 'webm-muxer';

let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    subtitles: {
        codec: 'S_TEXT/WEBVTT'
    },
    // ....
});

let subtitleEncoder = new SubtitleEncoder({
    output: (chunk, meta) => muxer.addSubtitleChunk(chunk, meta),
    error: e => console.error(e)
});
subtitleEncoder.configure({
    codec: 'webvtt'
});

let simpleWebvttFile =
`WEBVTT

00:00:00.000 --> 00:00:10.000
Example entry 1: Hello <b>world</b>.
`;
subtitleEncoder.encode(simpleWebvttFile);

// ...

muxer.finalize();

You do not need to encode an entire WebVTT file in one go; you can encode individual cues or any number of them at once. Just make sure that the preamble (the part before the first cue) is the first thing to be encoded.
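
For example, a sketch of encoding the preamble and then individual cues one at a time (the cue contents are hypothetical):

subtitleEncoder.encode('WEBVTT\n\n'); // The preamble must be encoded first

subtitleEncoder.encode(
`00:00:00.000 --> 00:00:05.000
First cue
`);

subtitleEncoder.encode(
`00:00:05.000 --> 00:00:10.000
Second cue
`);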

Size "limits"

This library can mux WebM files up to a total size of ~4398 GB and with a Matroska Cluster size of ~34 GB.

Implementation & development

WebM files are a subset of the more general Matroska media container format. Matroska in turn uses a format known as EBML (think of it as binary XML) to structure its files. This project therefore implements a simple EBML writer to create the Matroska elements needed to form a WebM file. Many thanks to webm-writer-js for being the inspiration for most of the core EBML writing code.

For development, clone this repository, install everything with npm install, then run npm run watch to bundle the code into the build directory. Run npm run check to run the TypeScript type checker, and npm run lint to run ESLint.


webm-muxer's Issues

Chrome throwing error and failing to export video

I had code that was working; I don't think anything changed, but I'm now getting this error:

A VideoFrame was garbage collected without being closed. Applications should call close() on frames when done with them to prevent stalls.

I'm not explicitly calling close() anywhere, but neither are you in your example code.

I'm using the newest published version of this library and passing canvas frames into code like this:


export default class WebM {
  constructor(width, height, transparent = true, fps) {
    this.muxer = new Muxer({
      target: new ArrayBufferTarget(),
      video: {
        codec: 'V_VP9',
        width: width,
        height: height,
        frameRate: fps,
        alpha: transparent,
      },
      audio: undefined,
      firstTimestampBehavior: 'offset',
    });


    this.videoEncoder = new VideoEncoder({
      output: (chunk, meta) => this.muxer.addVideoChunk(chunk, meta),
      error: (error) => reject(error),
    });
    this.videoEncoder.configure({
      codec: 'vp09.00.10.08',
      width: width,
      height: height,
      bitrate: 1e6,
    });
  }

  addFrame(frame, time, frameIndex) {
    return new Promise((resolve) => {
      this.videoEncoder.encode(new VideoFrame(frame, { timestamp: time * 1000 }), {
        keyFrame: !(frameIndex % 50),
      });
      resolve();
    });
  }

  generate() {
    return new Promise((resolve, reject) => {
      this.videoEncoder
        .flush()
        .then(() => {
          this.muxer.finalize();
          resolve(new Blob([this.muxer.target.buffer], { type: 'video/webm' }));
        })
        .catch(reject);
    });
  }
}

Help with multimedia file (muxing audio and video)

Hi,

I have a problem with the last part of transforming an mp4 file into a webm file.

I could not understand your explanation in this part of your docs: https://github.com/Vanilagy/webm-muxer#media-chunk-buffering.

I'm trying to pass the encoded parts of audio and video to the muxer in an interleaved way, but it is not working.

The image below shows the order in which I am sending the encoded chunks to the muxer. The number in front of each label is the chunk timestamp. I tried several orders and the resulting file doesn't play.

[image]

Do you have any suggesting about this?

Thank you.

Need help with YouTube Live Ingest using HLS

Hi there,

I'm currently working on a project that involves live streaming to YouTube using the HLS protocol. I came across your webm-muxer library and was impressed with its performance and simplicity.

However, I'm having trouble figuring out how to use it with YouTube's Live Ingest feature. I was wondering if you could provide some guidance or examples on how to do this.

Any help would be greatly appreciated!

Thank you.

Best regards.

WebM plays way too fast when encoded on iOS 16.4

Hello there
Not sure if it has something to do with webm-muxer, which is awesome by the way.

I'm using your library with VideoFrames from a live WebRTC feed. I'm encoding with the VP8 and VP9 codecs using the WebCodecs API. In the next version of Safari (16.4), the WebCodecs API is also available.

It works great on Android, but when I create WebM videos from Safari, the resulting videos play very fast. Also, I cannot open them in VLC; they play in the native Windows player though.

I tried reducing the rate at which the VideoFrames are pushed to the encoder so that the result is about 15 fps; again, this works as expected on Android.
Do you have any hints regarding this? As I understand it, setting the fps param in webm-muxer is only informative (metadata), right?
Because I send each frame manually, I don't think the fps param of VideoEncoder has any influence.

Support mobile Chrome

The demo fails in mobile Chrome (110 & 111) with these errors:

DOMException: Input audio buffer is incompatible with codec parameters
Uncaught (in promise) DOMException: Failed to execute 'encode' on 'AudioEncoder': Cannot call 'encode' on a closed codec

Looks like it should work
https://caniuse.com/webcodecs

Video is flickering, seems to be related to slow Android device

Hi,

I tried using the muxer on a slower Android device and the resulting video flickers; it almost seems like, every second, a frame from a couple of milliseconds earlier is displayed.

The video has a 1080 × 1920 resolution and is MPEG-4 AAC H.264 encoded.

I tried whether videoBitrate had an effect, but it didn't seem to make any difference. Encoding on macOS works correctly. This is with V_VP9 and vp09.00.10.08.

I'm sorry I don't have any more specific details. Is there any reason this might happen? Anything I can configure to improve the output?

Can microphone audio be mixed in when recording the screen?

Hi there! I'm wondering how to use this library for screen recording since I'm not using Canvas. Also, I'll be speaking into a microphone while recording and I'd like to merge the audio from the microphone with the video. Can you guide me on how to do that? Thanks!

	import { Muxer, ArrayBufferTarget } from 'webm-muxer';

	let audioTrack: MediaStreamTrack;
	let audioTrack1: MediaStreamTrack;
	let audioEncoder: AudioEncoder | null;
	let videoEncoder: VideoEncoder | null;
	let muxer: Muxer<ArrayBufferTarget> | null;

	async function start() {
		let userMedia = await navigator.mediaDevices.getUserMedia({ video: false, audio: true });
		let _audioTrack = userMedia.getAudioTracks()[0];
		let audioSampleRate = _audioTrack?.getCapabilities().sampleRate?.max || 22050;

		let displayMedia = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
		let _audioTrack1 = displayMedia.getAudioTracks()[0];
		let audioSampleRate1 = _audioTrack1?.getCapabilities().sampleRate?.max || audioSampleRate;

		let _muxer = new Muxer({
			target: new ArrayBufferTarget(),
			video: {
				codec: 'V_VP9',
				width: 1280,
				height: 720
			},
			audio: {
				codec: 'A_OPUS',
				sampleRate: audioSampleRate1,
				numberOfChannels: 1
			},
			firstTimestampBehavior: 'offset' // Because we're directly piping a MediaStreamTrack's data into it
		});

		let _videoEncoder = new VideoEncoder({
			output: (chunk, meta) => _muxer.addVideoChunk(chunk, meta),
			error: (e) => console.error(e)
		});
		_videoEncoder.configure({
			codec: 'vp09.00.10.08',
			width: 1280,
			height: 720,
			bitrate: 1e6
		});

		let _audioEncoder = new AudioEncoder({
			output: (chunk, meta) => _muxer.addAudioChunk(chunk, meta),
			error: (e) => console.error(e)
		});
		_audioEncoder.configure({
			codec: 'opus',
			numberOfChannels: 1,
			sampleRate: audioSampleRate1,
			bitrate: 64000
		});

		writeAudioToEncoder(_audioEncoder, _audioTrack);
		writeAudioToEncoder(_audioEncoder, _audioTrack1);

		muxer = _muxer;
		audioEncoder = _audioEncoder;
		audioTrack = _audioTrack;
		audioTrack1 = _audioTrack1;
	}

	function writeAudioToEncoder(audioEncoder: AudioEncoder, audioTrack: MediaStreamTrack) {
		// Create a MediaStreamTrackProcessor to get AudioData chunks from the audio track
		let trackProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
		let consumer = new WritableStream({
			write(audioData) {
				audioEncoder.encode(audioData);
				audioData.close();
			}
		});
		trackProcessor.readable.pipeTo(consumer);
	}

	let frameCounter = 0;
	function encodeVideoFrame(videoEncoder: VideoEncoder) {
		let frame = new VideoFrame(canvas, {
			timestamp: ((frameCounter * 1000) / 30) * 1000
		});

		frameCounter++;

		videoEncoder.encode(frame, { keyFrame: frameCounter % 30 === 0 });
		frame.close();
	}

	const endRecording = async () => {
		audioTrack?.stop();
		audioTrack1?.stop();

		await audioEncoder?.flush();
		await videoEncoder?.flush();
		muxer?.finalize();

		if (muxer) {
			let { buffer } = muxer.target;
			downloadBlob(new Blob([buffer]));
		}

		audioEncoder = null;
		videoEncoder = null;
		muxer = null;
	};

	const downloadBlob = (blob: Blob) => {
		let url = window.URL.createObjectURL(blob);
		let a = document.createElement('a');
		a.style.display = 'none';
		a.href = url;
		a.download = 'picasso.webm';
		document.body.appendChild(a);
		a.click();
		window.URL.revokeObjectURL(url);
	};

I have a couple of questions. Can this library merge two audio segments into one media file? And is it possible to process videos without using Canvas?

Stream to web storage

Hi @Vanilagy

Just wanted to check if there is any way we can use the streaming option to stream data to web storage like IndexedDB.
I would like to avoid the ArrayBufferTarget, as in-memory usage may increase for larger videos, but I don't have the flexibility to prompt the user to save to a file, so I was wondering if an in-memory streaming option is available somehow.

Thanks,
Neeraj

Encode alpha video to WebM?

Chrome 31 now supports video alpha transparency in WebM.
How do I encode alpha videos with webm-muxer?

VideoEncoderConfig:
alpha: 'keep', // keep alpha channel

It doesn't work.

Writing to disk via node

Hi @Vanilagy!
First of all, thank you for creating this amazing library and for the active maintenance. This is super helpful for a use case that I have been working on.

I was following this comment and had a couple of doubts:

  1. For writing to disk in a Node.js environment, would you suggest using StreamTarget with or without the chunked option, and why?
  2. For StreamTarget, is backpressure handled by the library by default? Writing to a WriteStream will require this capability at some point so that the writes happen efficiently.

Another quick question: is it possible to change the width, height or bitrate configuration of the Muxer midway?

Thanks,
Neeraj

Audio as track 1?

When encoding audio-only content, is the audio set as track 1? How can I set it to track 1?

Help getting audio from audio context working

I am wondering if anyone can help me mux video and audio (not from the microphone) together. Below is a snippet of some of the code I am using inside a cables.gl op file. I have managed to feed canvas frames one by one to the muxer and get perfectly formed videos with no missing frames. However, when I add the audio, the video is not viewable, and when I convert it to mp4 with ffmpeg there is no audio.

const audioCtx = CABLES.WEBAUDIO.createAudioContext(op);
const streamAudio = audioCtx.createMediaStreamDestination();

inAudio.get().connect(streamAudio); // this gets fed from an audio source in cables

audioTrack = streamAudio.stream;
recorder = new MediaRecorder(audioTrack);

muxer = new WebMMuxer({
    "target": "buffer",
    "video": {
        "codec": "V_VP9",
        "width": inWidth.get() / CABLES.patch.cgl.pixelDensity,
        "height": inHeight.get() / CABLES.patch.cgl.pixelDensity,
        "frameRate": fps
    },
    "audio": {
        "codec": "A_OPUS",
        "sampleRate": 48000,
        "numberOfChannels": 2
    },
    "firstTimestampBehavior": "offset" // Because we're directly pumping a MediaStreamTrack's data into it
});

videoEncoder = new VideoEncoder({
    "output": (chunk, meta) => { return muxer.addVideoChunk(chunk, meta); },
    "error": (e) => { return op.error(e); }
});
videoEncoder.configure({
    "codec": "vp09.00.10.08",
    "width": inWidth.get() / CABLES.patch.cgl.pixelDensity,
    "height": inHeight.get() / CABLES.patch.cgl.pixelDensity,
    "framerate": 29.7,
    "bitrate": 5e6
});

if (audioTrack) {
    op.log('we HAVE AUDIO !!!!!!!!!!!!!!!!!!');

    /* I REMOVED ALL THE CODE FROM THE DEMO FROM HERE

    // const audioEncoder = new AudioEncoder({
    //     output: (chunk) => muxer.addRawAudioChunk(chunk),
    //     error: e => console.error(e)
    // });
    // audioEncoder.configure({
    //     codec: 'opus',
    //     numberOfChannels: 2,
    //     sampleRate: 48000, // todo: should have a variable
    //     bitrate: 128000,
    // });

    // Create a MediaStreamTrackProcessor to get AudioData chunks from the audio track
    // let trackProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
    // let consumer = new WritableStream({
    //     write(audioData) {
    //         if (!recording) return;
    //         audioEncoder.encode(audioData);
    //         audioData.close();
    //     }
    // });
    // trackProcessor.readable.pipeTo(consumer);

    TO HERE */

    recorder.ondataavailable = function (e) {
        op.log('test', e.data); // this returns a blob {size: 188409, type: 'audio/webm;codecs=opus'}
        // audioEncoder.encode(e.data);
        muxer.addAudioChunkRaw(e.data); // this throws no errors
    };
    recorder.start();
}

Timeline for Firefox VideoEncoder support

This is a question somewhat unrelated to this library, but:

I know most browsers have a public ticket system that allows devs to track the progress of features being added/fixed. I looked everywhere yesterday and couldn't find any mention of a timeline for VideoEncoder support in Firefox, like if it was even on their radar or not.

Do you know where to look for this? Are you in any secret discords where they talk about it?

Love your library by the way, was a breeze to implement & use, with no headaches yet.

Using with nodejs fs

Hi Vanilagy,

For an Electron app, I have to stream the creation of a video without being able to use the Web File System API.

So I use "fs", and I wanted to know if there is a way to stream like with the Web File System API. Currently I'm using the buffer, but it's not ideal because I have long 4K videos.

Would it be possible to do something about this?

Thank you !

[FR] Support more input types

I want to use this library to re-mux a raw H.264 stream into a WebM file (because WebM has better support among media players than a raw H.264 stream).

Because I already have an encoded stream, I don't need (or want) WebCodecs API to be involved (browser compatibility is another concern).

But currently, this library does an instanceof test against EncodedVideoChunk here:

trackNumber: externalChunk instanceof EncodedVideoChunk ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER

I know I can construct EncodedVideoChunks with my encoded data, but ideally, I want to supply the buffer directly to this library, saving the extra memory allocation and copying.

I tried to modify this library like this:

diff --git a/src/main.ts b/src/main.ts
index 3109e82..840d756 100644
--- a/src/main.ts
+++ b/src/main.ts
@@ -226,7 +226,7 @@ class WebMMuxer {

 		this.writeVideoDecoderConfig(meta);

-		let internalChunk = this.createInternalChunk(chunk, timestamp);
+    let internalChunk = this.createInternalChunk(chunk, 'video', timestamp);
 		if (this.options.video.codec === 'V_VP9') this.fixVP9ColorSpace(internalChunk);

 		/**
@@ -328,12 +328,12 @@ class WebMMuxer {
 		}[this.colorSpace.matrix];
 		writeBits(chunk.data, i+0, i+3, colorSpaceID);
 	}

 	public addAudioChunk(chunk: EncodedAudioChunk, meta: EncodedAudioChunkMetadata, timestamp?: number) {
 		this.ensureNotFinalized();
 		if (!this.options.audio) throw new Error("No audio track declared.");

-		let internalChunk = this.createInternalChunk(chunk, timestamp);
+    let internalChunk = this.createInternalChunk(chunk, 'audio', timestamp);

 		// Algorithm explained in `addVideoChunk`
 		this.lastAudioTimestamp = internalChunk.timestamp;
@@ -356,7 +356,7 @@ class WebMMuxer {
 	}

 	/** Converts a read-only external chunk into an internal one for easier use. */
-	private createInternalChunk(externalChunk: EncodedVideoChunk | EncodedAudioChunk, timestamp?: number) {
+  private createInternalChunk(externalChunk: EncodedVideoChunk | EncodedAudioChunk, trackType: 'video' | 'audio', timestamp?: number) {
 		let data = new Uint8Array(externalChunk.byteLength);
 		externalChunk.copyTo(data);

@@ -364,7 +364,7 @@ class WebMMuxer {
 			data,
 			timestamp: timestamp ?? externalChunk.timestamp,
 			type: externalChunk.type,
-			trackNumber: externalChunk instanceof EncodedVideoChunk ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER
+      trackNumber: trackType === 'video' ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER
 		};

 		return internalChunk;

So I can give it plain objects. I haven't modified it to take buffers directly.

Here is my consuming code:

https://github.com/yume-chan/ya-webadb/blob/eaf3a7a3c829ebdbd4e1608c4cc0f3caf623f180/apps/demo/src/components/scrcpy/recorder.ts#L77-L100

        const sample = h264StreamToAvcSample(frame.data);
        this.muxer!.addVideoChunk(
            {
                byteLength: sample.byteLength,
                timestamp,
                type: frame.keyframe ? "key" : "delta",
                // Not used
                duration: null,
                copyTo: (destination) => {
                    // destination is a Uint8Array
                    (destination as Uint8Array).set(sample);
                },
            },
            {
                decoderConfig: this.configurationWritten
                    ? undefined
                    : {
                          // Not used
                          codec: "",
                          description: this.avcConfiguration,
                      },
            }
        );
        this.configurationWritten = true;

Recording a <canvas> and audio stream

Love this! I'm still new to video processing so I'm not sure if this is possible.

My goal is to apply filters, trim, and draw on top of a video.

I have a <video> element as source (that has an audio track).

By updating the currentTime and listening to "seeked", I've successfully managed to record video frames for a section of a given video (for example, timestamp 2000 to 3500). This works perfectly and is a lot faster than using the MediaRecorder.

Now I also want to add the correct section of the AudioTrack, and that's where I'm kind of lost.

I've tried to use the method in this issue and in the canvas drawing demo, but it doesn't seem to work. The WritableStream write function gets called, but the chunks in the AudioEncoder output have a byteLength of only 3, which seems incorrect.

If you could give me a pointer in the right direction that would be amazing.

Also, happy to support this project, so if you have a donation link, please let me know. 🙏

Support Safari

Safari is adding WebCodecs support (video-only for now) in the latest dev releases (https://caniuse.com/webcodecs)

Currently, the demo is failing with this error:

[Error] Unhandled Promise Rejection: ReferenceError: Can't find variable: AudioEncoder

read webm

Can I read the frame data from a WebM file and feed it to WebCodecs (VideoDecoder) to draw and play it on a canvas?

VideoDecoder;

// (........ webm track, frame, chunk? ........)

videoDecoder.decode(chunk);

StreamTarget onDone not available anymore since v4.0.0

Hi, we noticed that you removed the onDone method of StreamTarget in v4.0.0. Is there an alternative way to reliably know when all data has passed through the muxer once muxer.finalize() has been called?
We forward the data sent via StreamTarget to a file, but can't use FileSystemWritableFileStream directly for various reasons, and we used the onDone method as the trigger to close the file handle.

webm-muxer throws a "Matroska cluster too big" error even with a 10-second keyframe interval

I have a media pipeline in which the encoder stage feeds the recording stage; the recording stage uses mp4-muxer/webm-muxer to write a local media file. I was using mp4-muxer and everything worked fine. But when I switch to webm-muxer, I get the error below and the recorder refuses to write the file. I am inserting keyframes every 10 seconds. I wonder whether something is wrong with my usage. Are there any extra options we need to pass compared to mp4-muxer?

Current Matroska cluster exceeded its maximum allowed length of 32768 milliseconds. In order to produce a correct WebM file, you must pass in a video key frame at least every 32768 milliseconds.

Please advise.
Thanks

How to use with PHP?

Hello maintainers and community members,

I'm currently working on a project that uses PHP for the backend and would like to take advantage of the webm-muxer TypeScript package for WebM/Matroska multiplexing. Given the advantages of this package, including its speed, size, and support for both video and audio as well as live-streaming, I believe it could greatly benefit our workflow.

Here are my main questions and areas of concern:

  1. Node.js Bridge: Considering webm-muxer is written in TypeScript, is there a recommended approach to call the functions from PHP, possibly via a Node.js bridge? Has anyone successfully integrated it using solutions like phpexecjs or others?
  2. Real-time Performance: When using it with PHP, especially in a real-time environment like live-streaming, are there any performance bottlenecks or challenges we should anticipate?
  3. Temporary Storage: For large video/audio files, temporary storage might be a concern. Does webm-muxer have any built-in utilities for managing temporary files, or would this have to be managed entirely on the PHP side?
  4. Concurrency: PHP can spawn multiple processes or threads (using solutions like pthreads). How thread-safe is webm-muxer in concurrent scenarios?
  5. API Wrapper: Is there an existing PHP wrapper for the webm-muxer API, or would it be recommended to build a custom wrapper tailored to our application's needs?
  6. Error Handling: How does webm-muxer report errors, and what would be the best way to catch and handle these errors on the PHP side?
  7. Updates & Maintenance: With potential updates to webm-muxer, what's the best approach to ensure that the PHP integration remains stable and up-to-date?

I appreciate any feedback, examples, or pointers from those who have attempted or succeeded in such an integration. Thank you in advance for your help and insights!

Dynamic browser support

Using this library, I generate videos on the fly.

Then, I try to play the video in the browser.

In desktop Chrome it works, but in Safari (desktop/mobile) or mobile Chrome it doesn't.
See example file:
test.webm

I could not yet figure out why this is the case. I would like this library to support an isPlayable method that, given a Muxer, determines whether the video is playable or not. It should return either true, false, or null.

Quick mock implementation:

async function isPlayable() {
    if (!('mediaCapabilities' in navigator)) {
        // Or maybe use `canPlayType` as a fallback, or `MediaSource.isTypeSupported(mimeType)`
        return null;
    }

    const videoConfig = {
        contentType: 'video/webm; codecs="vp09.00.10.08"', // Replace with actual codec
        width: 1280, // Replace with actual width
        height: 720, // Replace with actual height
        bitrate: 1000000, // Replace with actual bitrate
        framerate: 25, // Replace with actual frame rate
        hasAlphaChannel: true // Replace with actual alpha
    };

    // Example result: {"powerEfficient":true,"smooth":true,"supported":true,"supportedConfiguration":{"video":{"bitrate":1000000,"contentType":"video/webm; codecs=\"vp09.00.10.08\"","framerate":25,"height":720,"width":1280},"type":"file"}}
    const result = await navigator.mediaCapabilities.decodingInfo({ type: 'file', video: videoConfig });
    return result.supported;
}

Or maybe, if we specify hasAlphaChannel: true here but the supportedConfiguration says alpha is not supported, we might be able to make the file playable by using the supported configuration.

Variable frame rate

Hello,

Before starting I would like to thank you for webm-muxer.

I wanted to know: is it normal that the frame rate of your demo changes between files, while in the source code it is clearly set to frameRate: 30?

In VLC, both videos show "30.000300", but in After Effects I get 29.042 fps for the first file and 30.512 fps for the second.

Is there a possibility to have a video file with a fixed framerate?

Strange duration when inside web worker

Hi,

The muxer works great outside the webworker but when I put it inside a webworker the video duration is really weird.

Once the page is loaded, if I wait 10s before starting the recording, the video duration will be 10s plus the recorded video time. The second strange thing is that the video in the player will start at 10s (and not 0s), and it is impossible to seek before 10s.

And if I record a new video after the first one, say after 60s, the finished recording will be 64s long, etc.

When I reload the page, the "bug" starts from zero again, but grows according to the time I stay on the page.

After trying for days and days, reading all the documents on the subject and trying all possible examples, believing I was doing something wrong, I tried the webm-writer library modified by the WebCodecs team (example: https://github.com/w3c/webcodecs/tree/704c167b81876f48d448a38fe47a3de4bad8bae1/samples/capture-to-file) and everything works normally.

Do you have any idea what the problem is or am I doing something wrong?

Some example code:

  function start() {
    const [ track ] = stream.value.getTracks()
    const trackSettings = track.getSettings()
    const processor = new MediaStreamTrackProcessor(track)
    inputStream = processor.readable

    worker.postMessage({
      type: 'start',
      config: {
        trackSettings,
        codec,
        framerate,
        bitrate,
      },
      stream: inputStream
    }, [ inputStream ])

    isRecording.value = true

    stopped = new Promise((resolve, reject) => {
      worker.onmessage = ({data: buffer}) => {
        const blob = new Blob([buffer], { type: mimeType })
        worker.terminate()
        resolve(blob)
      }
    })
  }

Worker.js

import '@workers/webm-writer'

let muxer
let frameReader

self.onmessage = ({data}) => {
  switch (data.type) {
    case 'start': start(data); break;
    case 'stop': stop(); break;
  }
}

async function start({ stream, config }) {
  let encoder
  let frameCounter = 0

  muxer = new WebMWriter({
    codec: 'VP9',
    width: config.trackSettings.width,
    height: config.trackSettings.height
  })

  frameReader = stream.getReader()

  encoder = new VideoEncoder({
    output: chunk => muxer.addFrame(chunk),
    error: ({message}) => stop()
  })

  const encoderConfig = {
    codec: config.codec.encoder,
    width: config.trackSettings.width,
    height: config.trackSettings.height,
    bitrate: config.bitrate,
    avc: { format: "annexb" },
    framerate: config.framerate,
    latencyMode: 'quality',
    bitrateMode: 'constant',
  }

  const encoderSupport = await VideoEncoder.isConfigSupported(encoderConfig)
  if (encoderSupport.supported) {
    console.log('Encoder successfully configured:', encoderSupport.config)
    encoder.configure(encoderSupport.config)
  } else {
    console.log('Config not supported:', encoderSupport.config)
  }

  frameReader.read().then(async function processFrame({ done, value }) {
    let frame = value

    if (done) {
      await encoder.flush()
      const buffer = muxer.complete()
      postMessage(buffer)
      encoder.close()
      return
    }

    if (encoder.encodeQueueSize <= config.framerate) {
      if (++frameCounter % 20 == 0) {
        console.log(frameCounter + ' frames processed');
      }
      const insert_keyframe = (frameCounter % 150) == 0
      encoder.encode(frame, { keyFrame: insert_keyframe })
    }

    frame.close()
    frameReader.read().then(processFrame)
  })
}

async function stop() {
  await frameReader.cancel()
  const buffer = await muxer.complete()
  postMessage(buffer)
  frameReader = null
}

Screenshots

[Video sample 1]

[Video sample 2]

Is it possible to get the file buffer before .finalize() is called?

First of all - thank you for creating this amazing lib! I'm going to use it in the https://screen.studio rendering & encoding pipeline.

In my pipeline, I need to transcode the .webm file into .mp4 (I hoped the VP9 codec could be used directly in .mp4 without transcoding, but it will not play in QuickTime on macOS).

What I can do is wait for the .webm file to be ready and then start transcoding. This will work, but as export speed is critical for me, I'd like to start transcoding even before the .webm video file is ready (i.e. before all video chunks have been added).

Thus my question is: is it possible to get a file data buffer while I'm still adding video chunks, so I can already pass it to ffmpeg? This would allow me to parallelize encoding the .webm and transcoding it to .mp4.

Thank you!

Is one chunk equal to one segment?

I have a question about the streaming interface: is the onData callback given "complete" clusters, i.e. one full cluster at a time? Is one media cluster encapsulated inside one chunk? This is needed for live streaming.

Live streaming support

Would it be possible to get chunks of muxed data for live streaming? I use WebM streaming created with ffmpeg (sent to an Icecast2 server) and it works fine (I can play it in an HTML5 video or audio tag without problems), even though the WebM standard was not really conceived for live streaming...
