
Comments (9)

Santosh-Gupta commented on May 22, 2024

Thanks, looking forward to it!

The reason I'm interested in doing this in NumPy is that Keras/TensorFlow isn't great for sparse training. My use case involves training embeddings for a vocabulary in the hundreds of millions (nine figures). Keeping all of those embeddings in memory at once eats up a lot of RAM, and it isn't necessary, since only a small fraction of them is updated during each training step.

So I was thinking of using your library and altering it so that it saves the unused weights to disk, and only loads them when they are trained or about to be trained.
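
For example, one rough way I could prototype this (this is just my own sketch; nothing here is part of numpy-ml) is to keep the full embedding matrix in an `np.memmap` on disk, so only the rows touched by a batch ever get pulled into RAM:

```python
import numpy as np

class DiskBackedEmbedding:
    """Hypothetical helper: keep the embedding matrix on disk via np.memmap
    and only materialize the rows needed for the current update step."""

    def __init__(self, path, vocab_size, emb_dim):
        # mode="r+" assumes the file already exists with the right size;
        # use mode="w+" once beforehand to create and zero-initialize it
        self.weights = np.memmap(
            path, dtype=np.float32, mode="r+", shape=(vocab_size, emb_dim)
        )

    def gather(self, ids):
        # copy only the rows for this batch into memory
        return np.asarray(self.weights[ids])

    def scatter_update(self, ids, grads, lr=0.01):
        # write the updated rows back to disk; untouched rows are never loaded
        # (duplicate ids in a batch would need np.add.at-style accumulation)
        self.weights[ids] -= lr * grads
        self.weights.flush()
```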

ddbourgin commented on May 22, 2024

So I was thinking of using your library and altering it so that it saves the unused weights to disk, and only loads them when they are trained or about to be trained.

That sounds like a good idea! If you end up implementing this, definitely consider submitting a PR :) I think this could be quite useful for a number of different model components, including the sparse evolutionary training layer (which currently uses dense matrices 😬).

In the meantime, you might look into the magnitude package (I haven't used it myself, but it seems potentially relevant).

ddbourgin commented on May 22, 2024

Don't be sorry - this question is entirely justified; there is almost no usage documentation right now!

First, the bad news: at the moment, implementing word2vec will require a little extra legwork on your end. In particular, you'll need to implement either a negative-sampling/noise-contrastive-estimation loss or a hierarchical softmax loss. For the latter, you could use the numpy_ml.preprocessing.nlp.HuffmanEncoder module.
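
To give a sense of what's involved, here's a minimal sketch of a skip-gram negative-sampling loss in plain NumPy (this is just an illustration, not the implementation that will land in the repo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center_vec, context_vec, neg_vecs):
    """Skip-gram negative-sampling loss and gradients for one
    (center, context) pair with K sampled noise words.

    center_vec  : (d,)   input embedding of the center word
    context_vec : (d,)   output embedding of the true context word
    neg_vecs    : (K, d) output embeddings of the K negative samples
    """
    pos_score = sigmoid(context_vec @ center_vec)    # prob. the true pair is real
    neg_scores = sigmoid(-(neg_vecs @ center_vec))   # prob. the noise pairs are fake

    loss = -np.log(pos_score + 1e-12) - np.sum(np.log(neg_scores + 1e-12))

    # gradients of the loss w.r.t. each embedding involved
    d_center = (pos_score - 1.0) * context_vec + (1.0 - neg_scores) @ neg_vecs
    d_context = (pos_score - 1.0) * center_vec
    d_neg = np.outer(1.0 - neg_scores, center_vec)
    return loss, d_center, d_context, d_neg
```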

Now, the good news: I'm actively working on writing an NCE loss object, and hope to push it ASAP. I'll also probably include a convenience Embedding layer to make embedding lookups a bit faster. I will update this thread when it has been pushed.

Ultimately, once these two components are in place, you should be able to write a relatively straightforward model. To see what a model object might look like, you can look at some examples in either the numpy_ml.neural_nets.modules or numpy_ml.neural_nets.models directories.
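
As a rough illustration of the convenience layer I have in mind (names and details here are placeholders, not the final API), an embedding lookup really just needs to gather rows on the forward pass and scatter gradients back onto only those rows on the backward pass:

```python
import numpy as np

class EmbeddingLookup:
    """Toy embedding layer (placeholder, not the numpy-ml API): forward
    gathers rows by id, backward updates only the rows that were used."""

    def __init__(self, vocab_size, emb_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(vocab_size, emb_dim))

    def forward(self, ids):
        self._ids = np.asarray(ids)      # remember which rows were looked up
        return self.W[self._ids]

    def backward(self, dout, lr=0.05):
        # accumulate per-row gradients for the unique ids in the batch,
        # then update only those rows -- the rest of W is never touched
        uniq, inv = np.unique(self._ids, return_inverse=True)
        grads = np.zeros((uniq.size, self.W.shape[1]))
        np.add.at(grads, inv, dout)
        self.W[uniq] -= lr * grads
```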

ddbourgin commented on May 22, 2024

Finally, one last caveat - if you're interested in training a non-toy word embedding model, I'd highly recommend using a library like Keras, since it will make use of performance-optimized implementations for each model component. The code in this repo is meant to be clear and straightforward, but this often comes at the expense of efficiency!

Santosh-Gupta commented on May 22, 2024

It looks like this is exactly what I was looking for. I'm not familiar with a lot of the computer-science terminology, but it uses SQLite as the datastore, so I'm guessing it does what I need.

Edit: reading the paper:

"Magnitude queries return almost instantly and are memory efficient. It uses lazy loading directly from disk, instead of having to load the entire model into memory."

Wow! Thanks for this recommendation!!!

Santosh-Gupta commented on May 22, 2024

Looks like you can't use it for training, oh well

plasticityai/magnitude#32

Santosh-Gupta commented on May 22, 2024

But looking at the code gives me a good idea of how I could implement this in this library: use a SQLite database as the vector store and copy values back and forth between it and the NumPy embedding array.
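
Something like this is what I'm picturing - the table layout and class/method names below are just placeholders I'm making up, not anything from magnitude or numpy-ml:

```python
import sqlite3
import numpy as np

class SQLiteVectorStore:
    """Hypothetical store: keep embedding rows in SQLite as BLOBs, pull the
    rows needed for an update step into NumPy, then write them back."""

    def __init__(self, db_path, emb_dim):
        self.dim = emb_dim
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS vectors (id INTEGER PRIMARY KEY, vec BLOB)"
        )

    def load(self, ids):
        # fetch only the requested rows from disk
        # (assumes every id has already been inserted via save)
        placeholders = ",".join("?" * len(ids))
        rows = self.db.execute(
            f"SELECT id, vec FROM vectors WHERE id IN ({placeholders})",
            [int(i) for i in ids],
        ).fetchall()
        by_id = {i: np.frombuffer(b, dtype=np.float32) for i, b in rows}
        return np.stack([by_id[int(i)] for i in ids])

    def save(self, ids, vecs):
        # write the updated rows back; everything else stays untouched on disk
        self.db.executemany(
            "INSERT OR REPLACE INTO vectors (id, vec) VALUES (?, ?)",
            [(int(i), v.astype(np.float32).tobytes()) for i, v in zip(ids, vecs)],
        )
        self.db.commit()
```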

ddbourgin commented on May 22, 2024

Heya @Santosh-Gupta - I've just pushed a preliminary version of an NCELoss and word2vec model here and here.

Unfortunately, I suspect that if you're going to use the models on any sizeable dataset, you'll need to make some performance modifications first. Let me know if you decide to try it out / have any questions in the meantime!
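
For anyone who finds this thread later, a usage sketch might look something like the snippet below; the import path, constructor defaults, and method names are guesses, so check the pushed source for the actual interface:

```python
# Hypothetical usage sketch -- the names below are assumptions, not the
# confirmed numpy-ml interface; see the pushed NCELoss / word2vec source.
from numpy_ml.neural_nets.models import Word2Vec  # assumed import path

model = Word2Vec()                      # assumed defaults (skip-gram + NCE loss)
model.fit(["path/to/corpus.txt"])       # assumed: train on one or more text files
vector = model.get_vector("king")       # assumed: look up a learned embedding
```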

Santosh-Gupta commented on May 22, 2024

Sounds good!
