
anthumchris / fetch-stream-audio Goto Github PK

348 stars · 11 watchers · 23 forks · 945 KB

Low-latency web audio playback examples for decoding audio streams in chunks with the Fetch & Streams APIs

Home Page: https://fetch-stream-audio.anthum.com

License: MIT License

HTML 3.35% JavaScript 86.98% Shell 1.32% CSS 8.35%
fetch-api streams-api javascript stream javascript-fetch web-audio web-audio-api opus webassembly opus-codec

fetch-stream-audio's People

Contributors

anthumchris, dependabot[bot], neo


fetch-stream-audio's Issues

Save Opus packet to Ogg container

Hello!

I know this is slightly off-topic, but it is related.

I have Opus packets that each merge two frames, so a packet can be larger than 255 bytes (the maximum segment size in an Ogg page). I can split such a packet into two single-frame packets and save each in a separate segment.

But can we instead simply split the packet into 255-byte parts/chunks plus a last part of less than 255 bytes, and put those into consecutive segments of the Ogg page?

It doesn't work for me (maybe incorrect code), but I'm interested in how to handle this properly. Do we really need to repack the doubled packets into separate packets, or not?

Sorry for my bad English.
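
For context, a minimal sketch of Ogg lacing based on the Ogg spec (an assumption for illustration, not code from this repo): a packet longer than 255 bytes is written as a run of 255-byte lacing segments followed by a final segment shorter than 255 bytes (or a zero-length segment if the packet length is an exact multiple of 255), which suggests a multi-frame packet would not need to be repacked into single-frame packets.

// Hypothetical helper: compute the lacing values for one Opus packet.
function lacingValues(packetLength) {
  const values = [];
  while (packetLength >= 255) {
    values.push(255);
    packetLength -= 255;
  }
  values.push(packetLength);  // final segment < 255 (0 if the length is a multiple of 255)
  return values;
}

// lacingValues(505) -> [255, 250]: one packet spread across two consecutive segments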

Demo is broken - wrong wasm mimetype

The wasm in the demo is served with content-type: application/wasm; charset=utf-8, which doesn't make sense and breaks streaming compilation.
It should be served with the MIME type application/wasm only (a charset doesn't make sense for a binary format like wasm).
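
For what it's worth, a common client-side workaround (a sketch under the assumption that the module is loaded manually, not this repo's actual loader) is to fall back from streaming compilation when the Content-Type is wrong:

// Try streaming compilation first; fall back to ArrayBuffer compilation
// when the server's MIME type breaks instantiateStreaming().
async function loadWasm(url, imports) {
  if (WebAssembly.instantiateStreaming) {
    try {
      return await WebAssembly.instantiateStreaming(fetch(url), imports);
    } catch (e) {
      // e.g. streaming compile rejected because a charset was appended
    }
  }
  const bytes = await (await fetch(url)).arrayBuffer();
  return WebAssembly.instantiate(bytes, imports);
}

Fixing the server's Content-Type header is still the proper fix, since the fallback loses the streaming-compilation benefit.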

Reduce Number of AudioBuffers Created

Opus is decoding so quickly (20ms frames) that hundreds of AudioBuffer objects are being created. This causes performance issues for larger files and would create up to 20,000 objects for a 5-minute file.

After the first frames are decoded and scheduled to be played, the remaining decoded frames do not need to be scheduled so quickly. Ideally, decoded audio would only be scheduled when needed (during seeking or a few seconds before it is to be played). Because this is a developer demo, the simpler approach of creating fewer, larger AudioBuffers will be taken.
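
A minimal sketch of that direction (an assumption about the approach, not the repo's DecodedAudioPlaybackBuffer implementation): accumulate the small decoded frames and flush them as one larger AudioBuffer.

// onDecodedFrame, schedule, and the 48000 Hz sample rate are assumptions for
// illustration; left/right are Float32Arrays produced by the Opus decoder.
const TARGET_SAMPLES = 48000;   // ~1 second of audio instead of 20ms frames
let pending = [];
let pendingSamples = 0;

function onDecodedFrame(left, right, audioCtx, schedule) {
  pending.push([left, right]);
  pendingSamples += left.length;
  if (pendingSamples < TARGET_SAMPLES) return;

  const buffer = audioCtx.createBuffer(2, pendingSamples, 48000);
  let offset = 0;
  for (const [l, r] of pending) {
    buffer.getChannelData(0).set(l, offset);
    buffer.getChannelData(1).set(r, offset);
    offset += l.length;
  }
  pending = [];
  pendingSamples = 0;
  schedule(buffer);   // one AudioBuffer replaces ~50 per-frame buffers
}

The first few flushes could use a smaller TARGET_SAMPLES so playback still starts with low latency, then grow the threshold afterward.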

Add options for download throttle speed and Opus bitrate

Download speeds are currently fixed at 2 Mbps. Add UI options to change data rates to assess playback waiting times for Opus. Bitrate wouldn't apply to the WAV example, and slow speeds would mainly benefit Opus measurements since the WAV file requires much more bandwidth.
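
A rough sketch of how a configurable throttle could look on the client (an assumption for illustration; how the demo actually throttles downloads is not shown here):

// Read a fetch() body while delaying after each chunk so overall throughput
// approximates targetKbps. onChunk receives each Uint8Array as it "arrives".
async function readThrottled(response, targetKbps, onChunk) {
  const bytesPerMs = (targetKbps * 1000) / 8 / 1000;
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(value);
    await new Promise(resolve => setTimeout(resolve, value.byteLength / bytesPerMs));
  }
}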

When I lock the iPhone screen, playback stops in Safari

This is on the demo site; I haven't actually installed the software.

Also, the phone doesn't indicate that it is playing media. I only see the page contents appearing to play inside Safari, but no sound comes out.

After unlocking the screen, AudioBuffer Played doesn't increase and the play button remains in the playing state.

Using webpack instead of parcel

I'm having trouble getting the example code to work when using webpack. This is what I've tried:

package.json:

{
  "name": "fetch-stream-audio",
  "private": true,
  "main": "app.js",
  "scripts": {
    "dev": "./scripts/dev.sh",
    "build": "./scripts/build.sh",
    "clean": "rm -rf build dist",
    "clean-all": "rm -rf .cache build dist node_modules package-lock.json",
    "build-webpack": "webpack --config webpack.config.js",
    "start": "webpack-dev-server"
  },
  "devDependencies": {
    "@babel/core": "^7.0.0-0",
    "@babel/plugin-proposal-class-properties": "^7.7.4",
    "@babel/plugin-transform-runtime": "^7.7.6",
    "babel-eslint": "^10.0.3",
    "cssnano": "^4.1.10",
    "eslint": "^7.1.0",
    "opus-stream-decoder": "^1.2.6",
    "parcel-bundler": "^1.12.4",

    "file-loader": "^5.0.2",
    "webpack": "^4.41.5",
    "webpack-cli": "^3.3.10",
    "webpack-dev-server": "^3.10.1",
    "worker-loader": "^2.0.0"
  },
  "dependencies": {
    "@babel/runtime": "^7.7.7",
    "lit-html": "^1.1.2",
    "tinyh264": "0.0.6",

    "wasm-loader": "^1.3.0",
    "babel-loader": "^8.2.2"
  }
}

webpack.config.js:

const path = require('path')

module.exports = {
    mode: "development",
    entry: './src/js/app.js',
    output: {
        filename: 'main.js',
        path: path.resolve(__dirname, 'dist'),
    },
    resolve: {
        extensions: ['.mjs', '.wasm', '.js', '.json']
    },
    module: {
        rules: [
            {
                test: /\.wasm$/,
                loaders: ['wasm-loader']
            },
            {
                test: /\.mjs$/,
                exclude: /node_modules/,
                // type: "javascript/auto",
                use: {loader: 'babel-loader'}
            },
            {
                test: /\.worker\.js$/,
                use: {loader: 'worker-loader'}
            },
            {
                test: /\.(asset)$/i,
                use: [
                    {
                        loader: 'file-loader',
                        options: {
                            name: '[contenthash].wasm'
                        }
                    },
                ],
            }
        ]
    }
}

index.html was copied to /dist/

<html>
  <head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" href="./favicon.ico">
  <meta property="og:image" content="https://fetch-stream-audio-stage.anthum.com/images/icon-share.png">

  <meta name="description" content="Low-latency, high quality web audio playback examples with Opus, Streams, and Fetch APIs">
  <meta name="twitter:description" content="Low-latency, high quality web audio playback examples with Opus, Streams, and Fetch APIs">
  <meta property="og:description" content="Low-latency, high quality web audio playback examples with Opus, Streams, and Fetch APIs">

  <title>Streams Audio Playback Examples</title>
  <!-- Global site tag (gtag.js) - Google Analytics -->
  <script async src="https://www.googletagmanager.com/gtag/js?id=UA-50610215-8"></script>
  <script>
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());

    gtag('config', 'UA-50610215-8');
  </script>
  <script defer src="main.js"></script>                        <!--  I made this change -->
  <link rel="stylesheet" href="../src/css/app.css">       <!--  I made this change -->
</head>
<body>
  <header>
    <h1>Streams Audio Playback Examples</h1>
    <p>Low latency playback of audio chunks using the Fetch, Streams, and Web Audio APIs.</p>
  </header>
  <main>
  </main>
  <footer>
    <a href="https://github.com/AnthumChris/fetch-stream-audio">View on GitHub</a>
  </footer>
</body>
</html>

This is the error I'm getting in the Chrome console when running "yarn start":

[screenshot of the Chrome console error]

Any help would be greatly appreciated.

Set Position / Scrubbing / Seeking

How can I set the start position within the stream? For example, a user might want to click/seek to a given point in the streamed file, and I want to set the player to that position to resume playback from that point (without waiting for the whole file to load).
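
One possible direction (a hedged sketch, not something this repo is confirmed to implement): request the stream from an approximate byte offset with an HTTP Range request and feed the decoder from there. Mapping a seek time to a byte offset is the hard part; for Ogg Opus it would involve page granule positions, and byteOffset below is a hypothetical placeholder.

// Resume fetching from byteOffset; the server must support range requests (206).
async function fetchFrom(url, byteOffset) {
  const response = await fetch(url, {
    headers: { Range: `bytes=${byteOffset}-` }
  });
  return response.body.getReader();   // pipe chunks into the decoder as usual
}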

Reduce AudioBuffer Memory Usage

@teropa made a great recommendation to add backpressure to the decoder. This would prevent excessive memory use by limiting the number of unplayed, decoded AudioBuffers. One suggestion is to control the amount of decoded audio based on time (seconds), because DecodedAudioPlaybackBuffer grows audio buffers exponentially: it creates many small buffers at first to provide low-latency playback, then grows the sizes to create fewer, larger AudioBuffers, which reduces CPU usage and audio glitches.
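
A minimal sketch of time-based backpressure (an assumption about the approach; reader, decoder, and getBufferedSeconds are hypothetical names): stop pulling from the network while too much decoded-but-unplayed audio is queued.

const MAX_BUFFERED_SECONDS = 10;

async function pump(reader, decoder, getBufferedSeconds) {
  while (true) {
    // wait for playback to drain before requesting/decoding more
    while (getBufferedSeconds() > MAX_BUFFERED_SECONDS) {
      await new Promise(resolve => setTimeout(resolve, 250));
    }
    const { done, value } = await reader.read();
    if (done) break;
    decoder.decode(value);
  }
}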

Prevent opus_chunkdecoder_enqueue() Segmentation Fault

A segfault could occur if too many bytes are enqueued before they are decoded and removed from decoder->buffer, causing a buffer overflow in decoder->buffer.

A 64k buffer should suffice because Opus Ogg "pages are a maximum of just under 64kB..." (https://xiph.org/ogg/doc/oggstream.html). However, the calling program must enqueue chunk sizes that divide evenly into 64k (e.g. 4 enqueues of 16k, 8 enqueues of 8k, etc.). Thus, 2 enqueue calls of 33k would yield 66k in the buffer, which should not be allowed because 66k > 64k.
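
A caller-side sketch of how this constraint could be respected (an assumption, not this repo's code; enqueue stands in for whatever wrapper ultimately calls opus_chunkdecoder_enqueue()):

// Re-slice incoming network chunks into a fixed size that divides evenly into
// the 64k decoder buffer, holding any remainder until more bytes arrive.
const ENQUEUE_SIZE = 16 * 1024;   // 16k divides evenly into 64k
let staged = new Uint8Array(0);

function enqueueChunk(enqueue, chunk) {   // chunk: Uint8Array from the network
  const merged = new Uint8Array(staged.length + chunk.length);
  merged.set(staged);
  merged.set(chunk, staged.length);

  let offset = 0;
  while (merged.length - offset >= ENQUEUE_SIZE) {
    enqueue(merged.subarray(offset, offset + ENQUEUE_SIZE));
    offset += ENQUEUE_SIZE;
  }
  staged = merged.subarray(offset);   // remainder waits for the next chunk
}

// at end of stream, any remainder would be flushed with one final enqueue call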

Add Opus decoding support

Chunk-based Opus file decoding should be supported. Emscripten can be used to build WebAssembly modules for decoding the data. The opusfile decoder library could be used if it supports chunk-based byte-array decoding. Initial research shows that a complete file may be needed for OggOpusFile objects. Otherwise, libogg may be needed to parse/assemble the chunks directly into decodable packets.

The preference is to use WebAssembly for all decoding functions and limit use of JS to parse Ogg file structure. mohayonao/libogg.js could be used if JS is needed.

Safari 11.1 does not execute "onended" for all AudioNodes created

The "onended" callback should fire after an AudioNode completes playing. This works in other browsers and is used for calculating when AudioBuffers play and updating the UI. This was not tested on Safari < 11.1.

The "onended" callbacks don't execute, and the UI is inaccurate after playback completes (AudioBuffer Played should say 51):

[screenshot (2018-03-21): the AudioBuffer Played counter stops short of 51 after playback completes]
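
For context, a stripped-down sketch of the kind of bookkeeping involved (an assumption for illustration; scheduleBuffer and updateUI are hypothetical names):

let playedCount = 0;

function scheduleBuffer(audioCtx, buffer, startTime, updateUI) {
  const node = audioCtx.createBufferSource();
  node.buffer = buffer;
  node.connect(audioCtx.destination);
  node.onended = () => updateUI(++playedCount);   // the callback Safari 11.1 skips
  node.start(startTime);
}

A time-based fallback (comparing audioCtx.currentTime against each node's scheduled end time) would be one way to keep the count accurate on browsers that skip some onended callbacks.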

Can't decode a stream of Ogg Opus encoded bytes

My program gets stuck on the third call to the decode function and I don't understand why. I'm not using a file, but rather a stream of Opus-encoded PCM that I format with an ogg library. Here is what I have:

This is one packet being passed into the decode function:

79,103,103,83,0,2,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,111,75,149,162,1,143,252,83,140,10,53,54,15,255,22,5,134,58,120,205,49,138,53,47,147,89,211,6,124,209,74,125,139,252,55,153,104,38,75,55,49,243,131,133,10,89,17,151,152,88,149,215,133,1,120,138,16,99,227,80,29,189,40,53,9,0,214,63,216,27,170,57,239,190,140,211,42,124,218,157,214,104,114,45,15,7,136,94,243,179,184,119,200,234,10,70,234,238,7,145,236,231,234,102,79,147,167,95,129,67,206,25,176,219,4,236,255,69,167,253,2,123,223,35,250,109,188,237,152,159,138,2,30,59,169,245,194,213,74,89,192,147,111,245,95,2,191,48,197

This is my index.js:

import { OpusStreamDecoder } from 'opus-stream-decoder';

const decoder = new OpusStreamDecoder({ onDecode });

function onDecode({left, right, samplesDecoded, sampleRate}) {
  console.log(`Decoded ${samplesDecoded} samples`);
  // play back the left/right audio, write to a file, etc
}

function buf2hex(buffer) { // buffer is an ArrayBuffer
  return Array.prototype.map.call(new Uint8Array(buffer), x => ('00' + x.toString(16)).slice(-2)).join(' ');
}

function main() {
  var mySocketAudio = new WebSocket(
    'ws://random.webaddress:8080', 'audio-only')
  
  mySocketAudio.binaryType = 'arraybuffer'
  
  mySocketAudio.onerror = function () {
    alert("failed to connect")
  };

  mySocketAudio.addEventListener('message', async function(event) {
    // packet length
    // console.log(new Uint8Array(event.data).byteLength)

    // bytes in dec
    //console.log(Array.apply([], new Uint8Array(event.data)).join(","));

    //await decoder.ready;
    //decoder.decode(new Uint8Array(event.data));
    //await decoder.ready
    //decoder.free()

    // bytes in hex
    // var buffer = new Buffer.from(new Uint8Array(event.data));
    // console.log(buf2hex(buffer))

    try {
      decoder.ready.then(_ => decoder.decode(new Uint8Array(event.data)));
    } catch (e) {
      decoder.ready.then(_ => decoder.free());
    }

    // free up the decoder's memory in WebAssembly (also resets decoder for reuse)
    decoder.ready.then(_ => decoder.free());
  })
}

main()

Also, if I don't call decoder.free() after every single received frame, onDecode never gets called. When calling free() as shown, samplesDecoded prints out -131.
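
For reference, a lifecycle sketch following the usage pattern in the code above (an assumption about intended usage, not a verified fix): decode every chunk of one continuous Ogg Opus stream, and call free() only once the stream ends.

import { OpusStreamDecoder } from 'opus-stream-decoder';

const decoder = new OpusStreamDecoder({ onDecode });

function onDecode({ left, right, samplesDecoded, sampleRate }) {
  console.log(`Decoded ${samplesDecoded} samples at ${sampleRate} Hz`);
}

async function onMessage(event) {              // WebSocket 'message' handler
  await decoder.ready;                         // resolves once the WASM is loaded
  decoder.decode(new Uint8Array(event.data));  // results arrive via onDecode
}

// decoder.free() only after the stream ends (it also resets the decoder for reuse),
// not after every WebSocket message.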

Add WebM Support

audio/webm codecs=opus files should be supported. WebM seems to be the preferred web container moving forward and is also supported by Media Source Extensions (MSE), which is another mechanism for playing audio quickly. WebM support would also allow direct comparisons of playback immediacy between MSE and the Web Audio API.

A WebAssembly or JS module would be needed to extract Opus packets from the container file. The current opus-stream-decoder WASM cannot decode on an Opus packet-only basis.

I don't know the WebM/Matroska container specs and will need to spend time reading those.

Create Reusable Packaged Module

This repo originally started as a proof-of-concept and could evolve into a formally packaged module that better serves the web community. The lessons and benefits learned from this POC could be used by other devs on their websites. A beginner also commented that the code was hard to follow because of inexperience building with NodeJS. Therefore, ease of use or drop-in implementation is also a priority. Identifiable goals could be:

  1. Feedback & participation from the community
  2. Opus audio playback in Safari (WASM-supported versions)
  3. Fast audio playback for low bandwidth or mobile users (56.6k and < 3G)
  4. Encouraging Opus over WAV in websites
  5. Encouraging Opus over MP3 and older lossy formats
  6. Simple API (src, play(), pause()) with advanced buffer options
  7. Fast playback of static Ogg Opus files
  8. Fast playback of live Ogg Opus streams (e.g. Icecast)

Stream audio output from PC

Hi,

This project looks great. Forgive me if this is a basic question.

I want to enable streaming the audio output from my computer (Mac). Latency isn't important.

WebRTC doesn't seem to cut it, as the quality isn't great.

Could I "stream" my audio to an Opus/WAV file, i.e. a file that is continually written to?

Do you have a suggestion for how this could be done with fetch-stream-audio?

Thanks,

Playback fails when the file size is too large

Excuse me,
I'm trying to load a large WAV music file, but problems occur, such as noise and playback stopping.
I also tried the MP3 format, but it doesn't work.

Provide WAV file served with Access-Control-Allow-Origin * for testing

I searched the web for a WAV file that is served with Access-Control-Allow-Origin: * for testing and have not yet been able to find such a resource.

Several .opus files are listed on the repository's main page, yet no WAV files.

Please provide a WAV file served with Access-Control-Allow-Origin: * for testing.

I am able to supply the WAV files if you can upload them to your server for public use.

noise when playing 24 bit wav files

Thanks a lot for putting this together. It's been really helpful. However, I have been running into an issue when trying to play 24-bit WAV files.

I have tried the WAV decoder as provided in this repo, and I also tried adding a WAV header to each chunk and decoding via audioContext.decodeAudioData. In both cases, 24-bit files emit a very unpleasant noise, yet the first chunk seems to play correctly. I suspected this might be related to the buffer size used to split up the chunks, but I haven't found a value that fixes playback. Might this be related to the use of Uint8Array to handle the audio stream data? (Although if that were the case, the first chunk should show similar behaviour, which it doesn't.)

Do you possibly have any other idea what might be the issue here?
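
One possibility worth checking (a hedged guess, not a confirmed diagnosis): 24-bit PCM samples are 3 bytes each, so if a chunk boundary falls mid-sample, every sample after it is shifted and decodes as noise. A conversion sketch, assuming little-endian signed 24-bit data whose chunk boundaries are kept aligned to whole samples:

function pcm24ToFloat32(bytes) {   // bytes: Uint8Array with length % 3 === 0
  const out = new Float32Array(bytes.length / 3);
  for (let i = 0, j = 0; i < bytes.length; i += 3, j++) {
    // assemble a little-endian 24-bit signed integer
    let sample = bytes[i] | (bytes[i + 1] << 8) | (bytes[i + 2] << 16);
    if (sample & 0x800000) sample |= ~0xffffff;   // sign-extend negative values
    out[j] = sample / 8388608;                    // scale to [-1, 1)
  }
  return out;
}

If the chunk size used to split the stream isn't a multiple of 3 bytes times the channel count, that alignment would break on every chunk after the first, which would match the symptom described.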
