
Comments (7)

bmcfee commented on August 30, 2024

I like this idea. One use case I've been running into lately is using multiple models with the same pump. It would be handy if a pump object could take an input data dictionary, and only compute the keys that are missing. This would be a bit simpler than relying on an external file-backed cache, but still cover a lot of scenarios.

from pumpp.

beasteers commented on August 30, 2024

Something like this is along the lines of what I was thinking. Is this about what you want?

class Pump:
    def transform(self, audio_f=None, jam=None, y=None, sr=None, crop=False,
                  data=None, refresh=False):
        data = dict() if data is None else data
        cache_file = None
        if self.cache_dir and audio_f:
            cache_id = hashlib.md5(audio_f.encode('utf-8')).hexdigest()  # or however
            cache_file = os.path.join(self.cache_dir, cache_id + '.h5')
            if not refresh:
                data = crema.utils.load_h5(cache_file, data=data)  # skip keys already in `data`

        ...

        for operator in self.ops:
            if operator.name in data:
                continue

            ...

        if cache_file:
            crema.utils.save_h5(cache_file, **data)
        return data


This would be a bit simpler than relying on an external file-backed cache

I'm not sure what you mean by this; do you mean always computing features on the fly?

I think there's room for both. As long as you hold on to the data dict reference, you'll have the computed features, and the next time you need that file you can load it from HDF5.

This lets you train a model the same way with or without running crema's prepare.py.
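As a toy illustration of the "hold on to the data dict" pattern (a hypothetical mini-Pump, not the real pumpp API — `ToyPump` and its `(name, function)` ops are stand-ins invented for this sketch):

```python
class ToyPump:
    """Toy stand-in for a pumpp Pump: each op is a (name, function) pair."""

    def __init__(self, ops):
        self.ops = ops
        self.calls = []  # record which ops actually ran

    def transform(self, audio_f, data=None):
        data = dict() if data is None else data
        for name, fn in self.ops:
            if name in data:  # key already computed: skip it
                continue
            self.calls.append(name)
            data[name] = fn(audio_f)
        return data


pump = ToyPump([('cqt', lambda f: f + ':cqt'),
                ('chord', lambda f: f + ':chord')])

data = pump.transform('clip.wav')             # computes both keys
data = pump.transform('clip.wav', data=data)  # all keys present: computes nothing new
```

The second call is effectively free, which is what makes sharing one data dict across several models attractive.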


tomxi commented on August 30, 2024

My 2c: I'm not sure what Brian's scenario is for pump sharing, but I think I understand Bea's position and proposal.

However, I kinda have a different view on how to handle the use cases Bea mentioned.
Currently the crema training scripts are divided into the prepare.py step, where audio and jams are pumped through pumps to become numpy arrays, and the train step, where these arrays are consumed by some ML model.
I welcome this divide, and in fact I think it's not divided enough: the only places where the train.py step accesses info in the pump that built the arrays are 1) to figure out the frame rate in time of the arrays, and 2) to build the appropriate input layer for a specific downstream ML package.

So while Bea asks for folding file-handling capabilities into pump to clean up prepare/train, I would actually call for a completely pump-agnostic train.py script, and modify pump accordingly...

One other consideration is... the less there is in pumpp, the less can go wrong and the fewer things there are to maintain?

Anyway, just my personal thoughts.....


beasteers commented on August 30, 2024

That's a fair position and I see where you are coming from. I'm just looking to reduce the boilerplate needed to set up a model and reuse as much as possible. Ideally, I'd like to have a file that defines some config, get_pump(), construct_model(), and evaluate_model(), and then crema could handle generic augment/prepare/sampling/training with room for overriding behavior, but idk. Maybe something like this would be better suited for crema ($ crema.augment stretch=..., $ crema.prepare input=...).

To your point about separating pumpp and training, what would be your plan for removing that dependency? I rather like pumpp's ability to generate its own Keras imports. And seeing as we use pumpp samplers for training, I don't know if it's realistic for them to be independent.


bmcfee commented on August 30, 2024

Just had a chat with @tomxi , so I'll try to clarify my thoughts here.

  1. I think a simpler pattern would be to accept a data= parameter, which can be a dict (or None by default), and skip over computing fields already present in data. That's as far as I think pumpp should go with caching. If you have to compute new outputs, they get stored in data[somekey] and the entire object is returned just like it currently does.
  2. If you want a file-backed cache, this could be implemented more generally by making a class that implements the dict API, but can pull data from a file (e.g. h5) or write through to it. This would be generally useful, I think, outside of pumpp, and wouldn't require pumpp to do any file io.
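Point 2 could be sketched roughly as follows. This is an illustration only: `FileBackedDict` is a hypothetical name, and `pickle` stands in for HDF5 here; a real version might wrap h5py or crema.utils.save_h5/load_h5 instead.

```python
import os
import pickle
from collections.abc import MutableMapping


class FileBackedDict(MutableMapping):
    """Dict-like cache that writes through to a file on every store.

    Sketch only: pickle stands in for HDF5 to keep the example
    dependency-free.
    """

    def __init__(self, path):
        self.path = path
        self._data = {}
        if os.path.exists(path):
            with open(path, 'rb') as f:
                self._data = pickle.load(f)

    def _flush(self):
        with open(self.path, 'wb') as f:
            pickle.dump(self._data, f)

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value
        self._flush()  # write-through: persist immediately

    def __delitem__(self, key):
        del self._data[key]
        self._flush()

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
```

An instance could then be passed as the `data=` argument to `transform`, so newly computed features get persisted as a side effect, without pumpp itself doing any file I/O.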

Does that make sense?


beasteers commented on August 30, 2024

Yeah, that sounds good. And #2 is an interesting idea.


tomxi commented on August 30, 2024

@beasteers
Yeah, actually I like the Keras input-layer generation as well. And I don't see a clean way of decoupling pump and training.

I don’t think I want to propose any changes at the moment besides the ones you are already working on.

Cheers!

