
Comments (4)

vaenyr commented on August 28, 2024

Hi, we've updated the code, the Google Drive and the README with what is needed to train accuracy predictors on the NB101, DARTS (very basic support) and NB-ASR (as an extra, see: https://github.com/SamsungLabs/nb-asr) search spaces. We also added support for zero-cost metrics (https://github.com/SamsungLabs/zero-cost-nas).

Feel free to reopen the issue if something is unclear/missing/failing! :)

from eagle.

vaenyr commented on August 28, 2024

Hi Adithya,

Sorry for the delay in releasing the code - we have everything ready in our internal repo, but unfortunately it takes some time to make those things public. I'll let you know through this issue when that happens.

However, just to make it clear, support for the DARTS and NB1 search spaces is limited to accuracy prediction only. The reason is that we simply did not have enough resources to run extensive benchmarking (as we did for NB2) on the larger search spaces, so there is no latency dataset for the other two. If you are interested in measuring and predicting latency, e.g. for NB1, you'd need access to the relevant hardware and would have to implement the functions that create models from model identifiers (such as arch vectors). I'm here to help if you decide to do something like that and run into any problems.
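To make the missing piece concrete, here is a minimal, hypothetical sketch of the two functions such an extension would need: one that builds a runnable model from an arch vector, and one that measures its wall-clock latency on the target hardware. The names (`OPS`, `model_from_arch_vector`, `measure_latency_ms`) and the toy list-based "operations" are illustrative only and are not part of the actual EAGLE codebase.

```python
import time

# Hypothetical operation table for a toy search space: each arch-vector
# entry indexes one operation. Real search spaces would map indices to
# actual neural-network layers instead of these list transforms.
OPS = {
    0: lambda x: x,                       # skip-connect
    1: lambda x: [v * 2 for v in x],      # stand-in for a conv layer
    2: lambda x: [max(v, 0) for v in x],  # stand-in for ReLU
}

def model_from_arch_vector(arch_vector):
    """Compose the operations indexed by the arch vector into a 'model'."""
    def model(x):
        for op_id in arch_vector:
            x = OPS[op_id](x)
        return x
    return model

def measure_latency_ms(model, sample, warmup=5, runs=20):
    """Median wall-clock latency in ms over several runs, after warm-up."""
    for _ in range(warmup):
        model(sample)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model(sample)
        times.append((time.perf_counter() - t0) * 1000.0)
    times.sort()
    return times[len(times) // 2]
```

Measuring every model in the search space this way yields the kind of latency dataset that exists for NB2 but not for NB1 or DARTS; warm-up runs and the median are used to reduce measurement noise.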

Thanks for your interest in our work! I hope you find it useful.


Adithya-MN commented on August 28, 2024

Thanks for the response - I really appreciate it! I completely understand the difficulty of benchmarking the whole NASBench-101/DARTS-type spaces - I was particularly curious because I saw that you had simulated NASBench-101 for the Best of Both Worlds paper.

As for HW metrics on such spaces, what are your thoughts on performance predictors trained on a limited number of points? I saw NB-301 take one such approach (albeit for accuracy) and wanted to hear your thoughts on doing the same here.


thomasccp commented on August 28, 2024

Hi Adithya,

Thank you for following our papers! Regarding performance prediction, the BRP-NAS paper (https://arxiv.org/pdf/2007.08668.pdf) has some relevant numbers for the NB-201 search space: Table 1 shows a latency predictor trained on 900 points and tested on 14k points; Table 2 shows accuracy predictors trained on 50/100/200 points.

It is more challenging in larger search spaces such as DARTS. That's why we proposed the binary relation prediction approach, supplemented by iterative data selection. In this setting, the predictor focuses on the ranking of models rather than the absolute values of their performance.
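The ranking idea above can be sketched in a few lines. This is a minimal illustration, not the BRP-NAS implementation: assume some binary relation predictor answers "is model a better than model b?", and models are then ordered by how many pairwise comparisons they win, so only relative order matters, never absolute accuracy or latency values. The function name `rank_by_pairwise` and the `better_than` callback are assumptions for this example.

```python
from itertools import combinations

def rank_by_pairwise(models, better_than):
    """Rank models using only a pairwise relation.

    better_than(a, b) -> True if the binary relation predictor says
    model a outperforms model b. Each model is scored by the number of
    pairwise comparisons it wins (a Copeland-style count), then sorted.
    """
    wins = {m: 0 for m in models}
    for a, b in combinations(models, 2):
        if better_than(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(models, key=lambda m: wins[m], reverse=True)
```

For example, with a hidden quality score standing in for the predictor, `rank_by_pairwise(["a", "b", "c"], lambda x, y: score[x] > score[y])` recovers the order implied by the scores without ever predicting the scores themselves. Iterative data selection would then measure the top-ranked candidates and add them to the training set for the next round.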

In the Best of Both Worlds paper (https://arxiv.org/pdf/2002.05022.pdf), we use a latency model (a latency lookup table of operations plus a scheduler). It works well if you know the specific details of the target hardware and can develop a hardware-dependent latency model. It is a slightly different approach from the GCN predictor, which is trained purely on measured latency.
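In its simplest form, a lookup-table latency model estimates a network's latency by summing pre-measured per-operation latencies. The sketch below uses made-up numbers and a hypothetical `estimate_latency_ms` name; it also assumes purely sequential execution, which is exactly the simplification a scheduler (as in the paper) refines for hardware that fuses or parallelizes operations.

```python
# Hypothetical per-operation latencies (ms) measured once on one device.
# A real table would be populated by benchmarking each op at each
# resolution/channel configuration that appears in the search space.
OP_LATENCY_MS = {
    "conv3x3": 0.40,
    "conv1x1": 0.15,
    "maxpool": 0.05,
    "skip":    0.00,
}

def estimate_latency_ms(ops):
    """Sum per-op latencies, assuming sequential execution with no fusion."""
    return sum(OP_LATENCY_MS[op] for op in ops)
```

Because the table is hardware-dependent, it must be rebuilt per device, whereas a GCN predictor trained on end-to-end measurements learns such device effects implicitly from data.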

