Comments (14)
The orange line below shows the duration per trial for a single worker; it appears that the time per trial is independent of the number of trials completed:
from optuna.
Thank you for reporting the issue and sharing your code. Given its complexity, we are unable to execute the provided code on our end. Could you perhaps provide a minimal version that focuses on and can reproduce this specific issue?
Thanks for your reply. I found out what the issue was: if you load a study using `optuna.study.load_study`, the study's settings are not loaded and revert to defaults. In this case that meant the sampler changed from NSGAIII to the default sampler, whose suggestion time grows with the number of previous samples. To get around this, either `optuna.study.create_study` with `load_if_exists=True` or `optuna.study.load_study` can be used; however, all study settings must be replicated.
As explained in https://optuna.readthedocs.io/en/stable/reference/samplers/index.html, for almost all samplers the time to suggest parameters grows with the number of trials. So I'm not sure this is a bug.
Thanks for your reply. I am using the NSGAIII sampler, which is not shown in that table, but I assume it has the same relationship as NSGAII, whose time per trial should be independent of the number of trials completed. As mentioned above, with a single worker I do not get the same issue.
Thanks. Right, but `_collect_parent_population` in NSGAIII has a for loop over trials, so I'm not sure this sampler is completely independent of the number of finished trials. Sorry I cannot answer clearly because I'm not familiar with this sampler...
Thanks! Could you share the minimal reproducible code with us?
I have sent an invite to collaborate to you.
Hi, thank you for inviting me to your repo, but I believe the core-dev team also needs access to the repo to investigate the issue, so I would appreciate it if you could create a public repo.
OK, I have made it public: https://github.com/jt269/Optuna_share
I am currently testing with a single worker created through a subprocess, and that also seems to have the same issue.
Hello, I have uploaded a minimal version. Thanks.
Hello, did you manage to reproduce this issue? Thanks
Sorry for my late response and thank you for sharing the minimum reproducible code.
I tried running your code, but unfortunately I couldn't reproduce the slowdown in my environment.
Could you share your results for the minimum reproducible code?
Thank you for sharing. To restore the study state with `load_study`, you need to specify `NSGAIIISampler` in the `sampler` argument.
If the problem is resolved, I would like to close the issue; what do you think?