
Comments (18)

ybayle commented on August 30, 2024

Hi, I would like to propose and add a new audio deformer to muda that I need for my PhD thesis. I need to modify the phase of frequencies in songs to produce new audio signals, which could then be used as input to a neural network. I want to assess the impact of such data augmentation on the performance of a neural network and to study what the neurons learn internally.
I want to guarantee the reproducibility of my algorithm, and to that end I would like to extend muda with this phase-based data augmentation.
Before writing a lot of code, I would like to discuss here how best to implement this functionality in muda.
The ground-truth (annotation) part should be straightforward, since the deformation neither time-stretches nor pitch-shifts the signal.
I already have some working Python code, and the algorithm is quite simple:
Signal -> FFT -> phase modification -> IFFT -> Signal'
I am wondering how many of the parameters to expose to the user (and how many to hide). Here are the parameters that could be considered (a rough sketch follows the list):

  • target_phase: a single value, applying the same phase to all frequencies, or an array of per-frequency phases.
  • bypass_indexes: an array of the frames or timestamps on which the deformation should not be applied.
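
For concreteness, here is a rough sketch of what I have in mind (the function name, the STFT framing, and the defaults are illustrative only, not a proposed interface):

```python
# Illustrative sketch only (not a muda interface): keep the STFT magnitudes,
# replace the phase, and leave bypassed frames untouched.
import numpy as np
import scipy.signal

def phase_deform(y, sr, target_phase=0.0, bypass_indexes=None, n_fft=2048):
    # Signal -> FFT
    _, _, stft = scipy.signal.stft(y, fs=sr, nperseg=n_fft)
    mag, phase = np.abs(stft), np.angle(stft)

    # Phase modification: a scalar is broadcast to all frequency bins,
    # an array gives one phase per bin.
    new_phase = np.broadcast_to(
        np.atleast_1d(target_phase)[:, np.newaxis], phase.shape).copy()
    if bypass_indexes is not None:
        idx = list(bypass_indexes)
        new_phase[:, idx] = phase[:, idx]  # do not touch these frames

    # IFFT -> Signal'
    _, y_out = scipy.signal.istft(mag * np.exp(1j * new_phase),
                                  fs=sr, nperseg=n_fft)
    return y_out[:len(y)]
```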


ybayle commented on August 30, 2024

Ok, thanks for the reply. I'll work on that and make a pull request once I have validated some sound examples and produced the corresponding test functions.


bmcfee commented on August 30, 2024

Note: fix timing vs group delay with convolved impulse response


cyrta commented on August 30, 2024

You should use not only sox or rubberband, but perhaps also VST plugins, certainly for dynamic range compression and filtering.
You could use https://github.com/teragonaudio/MrsWatson to script that easily; it is a batch CLI program that runs plugins with given parameters.

PS: nice project. I have some scripts to do that, but a Python library would be ideal.
Were you inspired by these?
http://www.eecs.qmul.ac.uk/~ewerts/publications/2013_MauchEwert_AudioDegradationToolbox_ISMIR.pdf
http://code.soundsoftware.ac.uk/projects/audio-degradation-toolbox

I will fork, make some changes, and then ask for a merge.


bmcfee commented on August 30, 2024

You should use not only sox or rubberband, but perhaps also VST plugins, certainly for dynamic range compression and filtering.
You could use https://github.com/teragonaudio/MrsWatson to script that easily; it is a batch CLI program that runs plugins with given parameters.

I'd rather not use command-line tools; I'd prefer library calls. Python bindings weren't quite there at the time I needed this to work, so the cmdline stuff was hacked in. I'd also prefer to avoid proprietary (i.e., non-free) dependencies. But otherwise: yeah, it'd be great to have a general audio effects binding! Do you think that's possible?

Were you inspired by these?

Yup! The details are in the muda paper, which (I hope!) explains what the difference between muda and adt is, and why we didn't simply fork adt.

I will fork, make some changes, and then ask for a merge.

Great! I'm also planning to do a bit more development on this and polish it into a proper Python library with tests and documentation, hopefully before the end of October.


cyrta commented on August 30, 2024

Hi, thanks for the link to the paper; it's clear now.

Command-line calls in Python should be avoided, sure.
There are many open-source libraries in Python, or with Python bindings, that could be used.
However, most of the audio signal processing in sound studios is done using VST, and most of the commonly used presets are stored in those plugins or on the internet, so it would be nice to have the possibility of using, e.g., reverb plugins.
There are even quite a number of open-source plugins, like Freeverb.

I have some bash scripts that use MrsWatson and proprietary plugins.
MrsWatson is a very good VST host, and it is already available.
I don't know of a good Python host for VST, and writing one is too time-consuming.
It might be nice to turn MrsWatson into a library and write simple Python bindings, but that also takes time, and it's better to produce more signal degradation results than to spend time keeping the code super clean.

I also plan to do much of the work at the end of October.
I will post an update on my progress then.


ejhumphrey commented on August 30, 2024

"time clip" is duration?


ejhumphrey commented on August 30, 2024

roger on the "rather not use any command-line tools" ... I'd be keen to sync on this in a side-bar? depending on the conversation, we can summarize for posterity here or in a separate issue / proposal if need be.


bmcfee commented on August 30, 2024

"time clip" is duration?

offset + duration, yeah. Think of randomly slicing the data and getting time-aligned chunks out. This is usually done in sampling / training pipelines, but it could be considered an "augmentation" as well.
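
Something like this hypothetical sketch (names are made up, just to illustrate the offset + duration idea):

```python
# Hypothetical sketch: sample a random offset and keep a fixed-duration
# slice. The annotations would be trimmed with the same offset/duration
# (e.g. via the slice operation in jams) so the chunks stay time-aligned.
import numpy as np

def time_clip(y, sr, duration, rng=None):
    rng = np.random.default_rng(rng)
    n_clip = int(round(duration * sr))
    offset = int(rng.integers(0, max(len(y) - n_clip, 0) + 1))
    return y[offset:offset + n_clip], offset / float(sr)
```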

roger on the "rather not use any command-line tools" ... I'd be keen to sync on this in a side-bar? depending on the conversation, we can summarize for posterity here or in a separate issue / proposal if need be.

what all did you have in mind?


ejhumphrey commented on August 30, 2024

I don't share the aversion to leveraging command-line interfaces under the hood if they provide functionality we can't otherwise get (easily) through native libraries / interfaces. I agree that proprietary hard dependencies are no-gos, but I quite like the idea of making the framework as versatile as possible, even if it means that a user might have to configure tools separately if they really want to harness muda.

For example, with time-stretching, we could provide different algorithms / backends for how this gets accomplished. Rubberband is fine, but what if I want to use Dirac, Elastique, or some other thing that doesn't / won't have a Python implementation?


bmcfee commented on August 30, 2024

but I quite like the idea of making the framework as versatile as possible

That's why you can extend the BaseDeformer object. 😁

Seriously though, cmdline dependencies are a total pain for maintainability. I'd have to check, but I'm pretty sure that 100% of the error reports I've received on muda have come down to broken cmdline dependencies with rubberband -- and that's a well-behaved and maintained package.

For example, with time-stretching, we could provide different algorithms / backends for how this gets accomplished.

This sounds like bloat/feature creep to me. IMO, the current stretch/shift stuff is good enough for government work*, and our efforts are better spent broadening the types of available deformations, rather than adding six variations of a thing we already have.

*i.e., for downstream feature extraction


bmcfee commented on August 30, 2024

Quick update: I have a first cut at chord simplification as part of a tag-encoding module here. It wouldn't be difficult to patch this into a muda deformer.


bmcfee commented on August 30, 2024

That sounds interesting, and it should be pretty easy to implement since you don't have to do any annotation modification. The DRC deformer is probably the closest in structure to what you describe, though its parameters are obscured by a dictionary of presets.

Otherwise, the parameters you describe sound reasonable. The key thing is to push all of the parameters that the deformation function needs into the states generator, which you can see examples of in all of the other muda deformers. This ensures that deformations can be reconstructed exactly, and everything is properly logged in the output jams file.
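
As a rough sketch of the pattern, outside of muda's actual class hierarchy (the names here are hypothetical; the real examples are in the existing deformers):

```python
# Rough sketch of the states-generator pattern: all of the randomness lives
# in the states generator, so every deformation can be replayed exactly from
# the logged state dictionaries.
import numpy as np

def states(n_samples=3, phase_range=(-np.pi, np.pi), seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        # Everything the audio transform needs goes into the state dict;
        # this is what ends up serialized in the output jams file.
        yield {'target_phase': float(rng.uniform(*phase_range))}

def apply_deformation(y, sr, state):
    # The transform only reads from `state` and never re-samples parameters,
    # so re-running it with a logged state reproduces the same output.
    # (`phase_deform` is the sketch from the first comment above.)
    return phase_deform(y, sr, target_phase=state['target_phase'])
```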


justinsalamon commented on August 30, 2024

@bmcfee quick question - by Attenuation are you referring to changing the loudness of the signal?

Multi-loudness training (MLT) has been shown to be especially useful for far-field sound recognition (e.g. original PCEN paper), so it would be a great deformer to have for projects such as BirdVox and SONYC.

Perhaps a reasonable interface for this would be for the user to provide min and max dBFS values; the deformer then chooses a value uniformly in the provided interval and adjusts the gain of the input signal to match the selected level.
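
Roughly something like this sketch (names are hypothetical, and RMS is used as the level measure purely for illustration):

```python
# Illustrative sketch: sample a target level uniformly in [min_dbfs, max_dbfs]
# and rescale the signal so its RMS level (re full scale = 1.0) matches it.
import numpy as np

def random_loudness(y, min_dbfs=-40.0, max_dbfs=-10.0, rng=None):
    rng = np.random.default_rng(rng)
    target_dbfs = rng.uniform(min_dbfs, max_dbfs)

    rms = np.sqrt(np.mean(y ** 2))
    current_dbfs = 20.0 * np.log10(max(rms, 1e-12))

    # Linear gain that moves the current level to the sampled target level
    gain = 10.0 ** ((target_dbfs - current_dbfs) / 20.0)
    return y * gain, target_dbfs
```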


bmcfee commented on August 30, 2024

by Attenuation are you referring to changing the loudness of the signal?

Yes, that's how ADT specified it (where this list originally came from). More generally, attenuation as a function of sub-bands (maybe notch filtering?), à la Sturm, might be useful as well.
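
For example, a hypothetical sub-band attenuation via a notch filter might look like:

```python
# Hypothetical sketch: attenuate a sub-band with a second-order IIR notch.
import scipy.signal

def notch_attenuate(y, sr, center_hz=1000.0, q=5.0):
    # Larger q -> narrower notch around center_hz
    b, a = scipy.signal.iirnotch(center_hz, q, fs=sr)
    return scipy.signal.lfilter(b, a, y)
```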


justinsalamon commented on August 30, 2024

More generally, attenuation as a function of sub-bands (maybe notch filtering?), à la Sturm, might be useful as well.

That's more in the direction of EQ, no? Also a useful deformer, though I'd probably keep it separate from a global loudness deformer (color vs intensity).


bmcfee commented on August 30, 2024

That's more in the direction of EQ, no?

Sure, but the former is a special case of the latter. Seems reasonable to me to keep the implementation unified.


bmcfee commented on August 30, 2024

Side note: once bmcfee/pyrubberband#15 gets merged, it would be possible to simulate tape-speed wobble (as done by ADT) by piece-wise linear approximation. We'd have to reimplement the timing logic for annotations, but this shouldn't be too difficult.
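
Roughly, and assuming the time-map interface from that PR (anchor pairs of input/output sample positions; this sketch may not match the final API):

```python
# Rough sketch of tape-speed wobble via a piece-wise linear time map, given
# as (input_sample, output_sample) anchor pairs. The annotation timing would
# still need the analogous piece-wise remapping.
import numpy as np
import pyrubberband as pyrb

def tape_wobble(y, sr, rate_hz=0.5, depth=0.05, n_anchors=32):
    # Anchor points in the input signal (first and last samples included)
    t_in = np.linspace(0, len(y), num=n_anchors, dtype=int)

    # Playback speed oscillates sinusoidally around 1; accumulate the output
    # positions so the map is piece-wise linear between anchors.
    speed = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t_in / sr)
    t_out = np.concatenate([[0.0], np.cumsum(np.diff(t_in) / speed[:-1])])

    time_map = list(zip(t_in.tolist(), np.round(t_out).astype(int).tolist()))
    return pyrb.timemap_stretch(y, sr, time_map)
```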

