audiojs / audio
Class for high-level audio manipulations [NOT MAINTAINED]
License: MIT License
https://github.com/jkroso/parse-duration
This can probably be addressed in audio-buffer-from.
In particular, async/await and Proxies may simplify loading and sample access, like https://github.com/sindresorhus/negative-array
// Get the last sample of the left channel
audio[0][-1]

// Load in a single scope
let audio = await Audio.load(url)
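As a sketch of the Proxy idea (the `negativeIndex` helper is hypothetical, not part of this library), negative indexing over a channel's samples could look like:

```javascript
// Hypothetical sketch: wrap a channel's sample array in a Proxy so that
// negative indices read from the end, similar to sindresorhus/negative-array.
function negativeIndex(arr) {
  return new Proxy(arr, {
    get(target, prop, receiver) {
      if (typeof prop === 'string') {
        const i = Number(prop);
        // Map negative integer indices onto the end of the array.
        if (Number.isInteger(i) && i < 0) return target[target.length + i];
      }
      return Reflect.get(target, prop, receiver);
    }
  });
}

const left = negativeIndex([0.1, 0.2, 0.3]);
console.log(left[-1]); // 0.3, the last sample of the channel
```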
Things the code needs to include. I don't want to include lots of things; I just want some pretty universally helpful/common data stored here. Keep in mind things like codecs, speaker playback, compression, etc. Variables specific to those domains can remain as options to their modules.
Iteration will be VERY helpful for writing multi-channel data.
For example, the use of generators would make it easy to write two dynamic channels at once:
const { PI, sin } = Math;

// Write sine waves
audio.write(function* (t, self) {
  // Channel 1 (first iteration)
  yield self.max * sin(2 * PI / 440 * t);
  // Channel 2 (second iteration)
  yield (self.max - 20) * sin(2 * PI / 1000 * t);
});
With this, you could also use arrays to write static multi-channel data:
audio.write([12, 13]);
Then you could have a single number like 3 expand to [3, 3]:
audio.write(3);
// Equivalent to
audio.write([3, 3]);
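A minimal sketch of how such input normalization could work (the `normalizeFrame` helper is hypothetical, not part of the library):

```javascript
// Hypothetical sketch of how write() could normalize its input: a single
// number expands to one value per channel, an array is used as-is.
function normalizeFrame(value, channels) {
  if (typeof value === 'number') return new Array(channels).fill(value);
  if (Array.isArray(value)) return value;
  throw new TypeError('Expected a number or an array of per-channel values');
}

console.log(normalizeFrame(3, 2));        // [3, 3]
console.log(normalizeFrame([12, 13], 2)); // [12, 13]
```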
I am thinking of making a PhoneGap app to wrap this library for music manipulation. Is it possible? Thanks.
Hi,
The npm registry has v1.2.0 listed: https://www.npmjs.com/package/audio
Any plans to release v2.0 anytime soon?
Thanks
I'm encountering "this._parseArgs is not a function" when calling audio.play
Is this a known issue? I do realize things are in flux at the moment. I can look into fixing it if you're looking for contributors. This project seems very promising.
My code is simply
const Audio = require('audio');
Audio.load('./test.mp3', (err, audio) => {
//repeat slowed down fragment
//audio.remove(3, 4.1)
audio.save('edited-record.mp3')
})
and it causes edited-record.mp3 to be just a text file with [object ArrayBuffer] as its contents. Node version is v10.15.3 on Windows 10.
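For what it's worth, the `[object ArrayBuffer]` content suggests the ArrayBuffer is being stringified somewhere instead of being written as bytes. A small illustration of the difference (not library code):

```javascript
// Hypothetical illustration of the symptom: stringifying an ArrayBuffer
// yields "[object ArrayBuffer]", which is what ends up in the file when the
// raw buffer is not wrapped in a Buffer/TypedArray view before writing.
const ab = new ArrayBuffer(4);
new Uint8Array(ab).set([82, 73, 70, 70]); // bytes for "RIFF"

console.log(String(ab));                 // "[object ArrayBuffer]", the bug
console.log(Buffer.from(ab).toString()); // "RIFF", the actual bytes
```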
Would be useful for transformative functions, as well as audio devices. Might be a simple Readable wrapper.
Does this library support such things?
Since I'm trying to lay a pretty environment-agnostic foundation here for Audio, maybe I should consider using the TypedArray objects. That relies on the ECMAScript spec instead of Node's Buffer.
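A small sketch of that idea: channel data kept in spec-defined `Float32Array`s, with a Node `Buffer` created only as an optional zero-copy view (variable names are illustrative):

```javascript
// Float32Array is defined by the ECMAScript spec, so the same channel data
// works in Node and the browser without depending on Node's Buffer.
const channel = new Float32Array([0, 0.5, -0.5, 1]);

// When a Node Buffer is needed, it can be a zero-copy view over the same
// memory. (Buffer exists only in Node; this line is Node-specific.)
const view = Buffer.from(channel.buffer, channel.byteOffset, channel.byteLength);
console.log(view.length); // 16 bytes: 4 samples of 4 bytes each
```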
Following audiojs/audio-buffer-list#5.
The current API approach is covered by a lot of similar components, so it is destined for insignificant competition and questionable value. The main blocker and drawback is the core audio-buffer-list component, which does not add much value compared to simply storing linked audio buffers.
Alternatively, audio could focus on storing edits-in-progress, rather than being a data wrapper with a linear API, similar to XRay's RGA.
I have been trying to think of an API for reading and writing multi-channel PCM data.
Hi!
Is there any way to get a blob from modified audio?
Audio.decode(new Blob([someBlob]), (err, audio) => {
const modified = audio.slice(0, 2.0);
// here I want to get a Blob (e.g. new Blob([modified]) or something like this)
});
The API draft is looking good, but there is a streaming issue.

audio.write(stream)
audio.write(stream, offset)

The method can take streams as input, and technically it can detect the type of stream. But how should we stream out? audio.read(offset, duration) returns a buffer.

Maybe a type param, so that audio.read(offset, duration, {type: 'stream'}) returns a node stream, {type: 'function'} returns a reader and {type: 'pull'} returns a pull-stream? Or ...{samplesPerFrame: 1024}, which can return a reader function that can be used by a stream later? Or audio.pull and audio.stream methods for the corresponding streams.

A property with raw channel data, as:

audio.channel[0]
audio.channel.L
audio.channel.SL
// but
audio.channels // audio.channel.length

Is there natural usability in that?
Hi @jamen!
Do I understand right that audio is an opinionated container for audio data with some handy API methods? Would it make sense to pivot it from audio-buffer so as to provide a bunch of common methods for audio manipulations, e.g. all of audio-buffer-utils?
It would be handy to require audio once and have all the manipulation work done on some audio data; I could use that in wavearea as well.
Using audio-buffer and the utils right now is a bit wearisome.
bitRate property

Actually, it is useful to write single pulse values. When generating waveforms it is kind of annoying that I cannot write the pulses directly to the audio in a loop without initializing an array (which is slow).
Hi, I tried a blob but it seems that blobs are not supported.
This:
new Audio(blob, {length: blob.size}).remove(1, 2);
throws:
Value must be an array or buffer.
I'm editing some audio files using Node 12.4 and this package... but when I try to get the stream using Audio.stream(), it tells me that it isn't a function. Any solution?
Edited (sorry for the confusion); the title of this query was wrong.
I tried to use your library with create-react-app and the "load" method is simply not found. It does find the library when required, however. When I go into node_modules and look in the audio folder, it looks like most of the library is missing. I feel like I am missing an install step.
We are presumably facing a sync/async API problem here.
For example we create audio from remote url. What should happen if we instantly apply manipulations?
let audio = Audio(url);
audio.trim();
//is audio trimmed here? obviously not, as it is not loaded yet.
//should we plan it to be trimmed once it is loaded?
//or should we return error because we cannot trim not loaded data?
Planning reminds of the jQuery pattern, where things get queued and applied in turn. The difference is that we aren't (necessarily) bound to an RT queue, and therefore can do things instantly. Unless we expand Audio to a stream-of-chunks wrapper, which is a different story. Planning forces us to provide a callback for every manipulation method, which is bad for simple use and good for worker use.
The classical jQuery/webworker async processing way:
let audio = Audio(url);
audio.trim().fadeIn(.5, (audio) => {
//audio is trimmed/faded here
});
//here audio is not ready yet, but trim/fade are queued
+ enables webworker mode, freeing the UI from heavy processing of large data
+ does not break the natural workflow; the code style is sync but the running of it is async
+ enables partially loaded data, like streams (potentially)
- makes the workflow more difficult
- maybe a tiny bit slower than the sync way
- a bit unconventional API, considering the possible promises style:
Audio(url)
.then(a => a.trim())
.then(a => a.fadeIn(.5))
.then(a => a.download())
This way is suggested in the zalgo article.
let audio = Audio(url);
audio.trim(); //throws error
audio.on('ready', () => {
audio.trim().fadeIn(.5);
});
+ easy API
- blocking processing, especially in the case of large audio files
@jamen, which one do you think is better?
Turns out that linear fading does not sound great. It seems that, first, some output devices have a built-in limiter/compressor, and second, loudness perception is not linear.
Should we tune fade so that it is not mathematically linear? audio-gain includes a tangential mode of setting volume; should we do the same here? It seems to me that having a natural-sounding fade by default is preferable, and if one needs a mathematical fade one should use audio.process or audio.fill.
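For illustration, a common perceptual alternative to a linear ramp is an equal-power (sinusoidal) curve; a hypothetical sketch comparing the two gain functions (names are mine, not the library's):

```javascript
// Compare a linear fade with an equal-power fade.
// t runs from 0 (silent) to 1 (full volume).
const linearGain = t => t;
const equalPowerGain = t => Math.sin(t * Math.PI / 2);

// At the midpoint the equal-power curve sits near 0.707 (about -3 dB),
// which tends to sound smoother than the linear 0.5 (about -6 dB).
console.log(linearGain(0.5));     // 0.5
console.log(equalPowerGain(0.5)); // ~0.7071
```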
Should we also cache audio to localStorage or similar?
I'm currently working on a project to play a stream of audio back through the speakers. Can this be done using this project?
Would be nice if there was a little utility to just create streams right off the bat.
Audio.stream({ ...options })
.pipe(through2.obj(function(audio, enc, callback) {
// ...
}));
When running the basic example I get a TypeError.
Code:
const Audio = require('audio')
Audio.load('./December.mp3').then(audio =>
audio
.trim()
.normalize()
.fade(.5)
.fade(-.5)
.save('sample-edited.wav')
).catch(err => { console.log(err); });
Output:
this.buffer.each is not a function
at Audio.trim (/Users/…/node_modules/audio/src/manipulations.js:282:15)
at Audio.load.then.audio (/Users/…/index.js:23:6)
at <anonymous>
I tried using the npm version as well as npm i audiojs/audio, with .wav and .mp3. Any ideas?
On npmjs I find https://www.npmjs.com/package/audio listing 1.2.0 without any documentation.
On GitHub, I find releases up to v2.0.0-1, but even those are from August 2016.
So I wonder whether this project is still active and maintained?
In fact, I'm looking for a way to change the speed of a PCM stream. My code looks like:
const lame = require('lame');
const request = require('request');
const Speaker = require('speaker');
const decoder = new lame.Decoder();
const speaker = new Speaker({
channels: 2,
bitDepth: 16,
sampleRate: 44100,
mode: lame.STEREO,
device: 'hw:1,0',
});
const req = request.get(url);
req
.pipe(decoder)
.pipe(speaker);
and I wonder if I can make use of some part of audio to control the playback speed.
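As a rough sketch of the idea (independent of this library), the playback speed of raw PCM samples can be changed by resampling; `resample` below is a hypothetical helper using linear interpolation:

```javascript
// Change playback speed of PCM samples by linear-interpolation resampling.
// speed > 1 plays faster (and raises pitch); speed < 1 plays slower.
function resample(samples, speed) {
  const out = new Float32Array(Math.floor(samples.length / speed));
  for (let i = 0; i < out.length; i++) {
    const pos = i * speed;
    const j = Math.floor(pos);
    const frac = pos - j;
    const next = samples[Math.min(j + 1, samples.length - 1)];
    out[i] = samples[j] * (1 - frac) + next * frac; // interpolate neighbors
  }
  return out;
}

const fast = resample(new Float32Array([0, 1, 0, -1]), 2);
console.log(fast.length); // 2: half as many samples, twice the speed
```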
Possible cases
// string in options
audio.shift(100, 'ms')
// string args
audio.pad('2s')
// options param
audio.pad(2, {unit: 's'})
// predefine unit (bad, since different instances have different behavior)
audio = Audio(2, 's')
audio.pad(2)
Would that be in demand? Would it be better than audio.pad(audio.time(2))?
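For comparison, a hypothetical `toSamples` helper shows what unit handling would involve either way (only 's' and 'ms' are handled here):

```javascript
// Convert a value-plus-unit argument into a sample count,
// given the audio's sample rate.
function toSamples(value, unit, sampleRate) {
  switch (unit) {
    case 's':  return Math.round(value * sampleRate);
    case 'ms': return Math.round(value * sampleRate / 1000);
    default:   throw new Error(`Unknown unit: ${unit}`);
  }
}

console.log(toSamples(2, 's', 44100));    // 88200
console.log(toSamples(100, 'ms', 44100)); // 4410
```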
During npm i audio I got an error:
> [email protected] install /home/yevhenii/Documents/node/audio-howler/node_modules/speaker
> node-gyp rebuild
make: Entering directory '/home/yevhenii/Documents/node/audio-howler/node_modules/speaker/build'
CC(target) Release/obj.target/output/deps/mpg123/src/output/alsa.o
../deps/mpg123/src/output/alsa.c:19:28: fatal error: alsa/asoundlib.h: No such file or directory
compilation terminated.
deps/mpg123/output.target.mk:110: recipe for target 'Release/obj.target/output/deps/mpg123/src/output/alsa.o' failed
make: *** [Release/obj.target/output/deps/mpg123/src/output/alsa.o] Error 1
make: Leaving directory '/home/yevhenii/Documents/node/audio-howler/node_modules/speaker/build'
The purpose is to use audio in a browser environment in a Vue.js project.
Is there any sense in a channel naming convention to shorten args?
audio.write(data, 'left') //vs audio.write(data, {channel: 0})
audio.read('right') //vs audio.read({channel: 1})
audio.shift(-100, 'right') //vs audio.shift(audio.time(100), {channel: 1})
audio.channel[0]
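A minimal sketch of what such a convention could look like (the mapping and channel ordering below are assumptions for illustration, not the library's actual layout):

```javascript
// Map short channel names to indices in an assumed stereo/5.1 ordering.
const CHANNEL_NAMES = { left: 0, L: 0, right: 1, R: 1, C: 2, LFE: 3, SL: 4, SR: 5 };

function channelIndex(channel) {
  if (typeof channel === 'number') return channel;     // already an index
  if (channel in CHANNEL_NAMES) return CHANNEL_NAMES[channel];
  throw new Error(`Unknown channel: ${channel}`);
}

console.log(channelIndex('left'));  // 0
console.log(channelIndex('right')); // 1
console.log(channelIndex(4));       // 4
```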
const audio = require('audio')
const newAudio = new audio()
newAudio.load('https://s3.amazonaws.com/philandrews/hardwell.mp3').then(audio => {
console.log(audio)
}, error => {
console.log(error)
})
Error: newAudio.load is not a function
What am I missing here?
I would like to rewrite the documentation by hand. I don't really like how JSDoc does it.
Audio.source PCM format (blocks)

The error appeared using the Audio library:
Audio.load(link).then(audio =>
audio
.slice(this.range[0], (this.range[1] - this.range[0]), {copy: false})
.save('sample-edited.mp3')
)
Uncaught (in promise) TypeError: saveAs is not a function
at eval (browser.js?f831:42)
at new Promise (<anonymous>)
at save (browser.js?f831:40)
at Audio.save (core.js?a391:285)
at eval (HelloWorld.vue?18db:122)
Audio was installed using npm i -S audiojs/audio
Cover all TODOs in the readme and fix all skipped tests.
Must implement the async iterable interface to stream samples.
.slice must return a ref to the initial buffer, not an immutable clone.
.trim and other modifiers must create diffs on the initial buffer, not change samples.
Great library!
Not sure if this is the right place for this, but an Audio.splice function would be fantastic to have access to. There is no other library for Node that has a splice function. This whole library is novel against anything else available. Splicing would be big.
Is there a possible way to hack this using the .trim method?
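As a rough illustration (plain arrays, not this library's API), a splice can be assembled from slices plus concatenation:

```javascript
// Remove a span of samples, optionally inserting new ones in its place,
// built from slice + concat on plain sample arrays.
function splice(samples, start, deleteCount, insert = []) {
  return samples
    .slice(0, start)
    .concat(insert, samples.slice(start + deleteCount));
}

console.log(splice([1, 2, 3, 4, 5], 1, 2));      // [1, 4, 5]
console.log(splice([1, 2, 3, 4, 5], 1, 2, [9])); // [1, 9, 4, 5]
```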
Writing a single pulse value is rare, and it can still be done easily via the array.
The current API mixes various concepts and various contexts, and mixing them all up does not work well.
Let's try to analyze and clean them up, and figure out the core value of the package, as distinguished from just a heap of assorted audio aspects, taking notes/ideas along the way.
There are the following apparent contexts.
Originally these concerns are handled each with separate node in audio-processing graph.
But they can be reclassified into:
- With different flavors (type of data storage, time-units convention, naming, stack of ops vs direct manipulations)
Also, it's worth correlating with MDN Audio - that includes own opinionated subset of operations.
Also, alternative audio modules (wad, aural, howler, ciseaux, etc.) each have their own subset of operations.
Consider possible concepts.
! one possible value is to just provide standard Audio container for node.
Processing and creating audio fast is important, would be nice to do some benchmarks to test this.
Turns out some players, like Winamp, can't reproduce a 1-sample wav.
Hence we may decide whether the save method needs to force a minimum file length.
Cover cases from https://andremichelle.github.io/neutrons/limiter.html
Similar to Image:

Audio('./x.wav', (err, a) => {})
Audio(blob).on('load', a => {})
Audio(file).then(a => {})

.decode, .load methods. Audio(url).then(a => {}): audio is thenable.
Otherwise the natural way of creating audio from blobs is not clear: Audio.load(blob), Audio.decode(blob) or Audio.from(blob)?