Comments (7)
I see something similar already on quali (same benchmark; the "Static" tab, with runs from 6 October), where keeping the best solution does not seem to improve solution quality. It's probably too soon to tell whether it makes things (much) worse, but there's no clear benefit either.
I ran some more experiments over the weekend to analyze performance. Instead of keeping the best solution, I kept 0, 1, or 15 binary-tournament-picked individuals and then added `minPopSize` random individuals on restart. These are the results:
| `nbIter` / `nbKeepOnRestart` | 0 | 1 | 15 |
|---|---|---|---|
| 5000 | 164337 | 164344 | 164350 |
| 7500 | 164344 | 164349 | 164358 |
| 10000 | 164345 | 164372 | 164379 |
| 12500 | 164347 | 164367 | 164368 |
Conclusion: just keep nothing; that should work best.
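To be explicit about the scheme I tested, here's a toy sketch. This is not our actual code: the dict-based individuals, `random_individual`, and the fake uniform costs are stand-ins so it runs on its own; only the keep-then-top-up restart logic matters.

```python
import random

rng = random.Random(42)


def random_individual():
    # Stand-in for constructing a random VRP solution; we fake a cost so
    # the restart logic below is runnable in isolation.
    return {"cost": rng.uniform(164_000, 170_000)}


def binary_tournament(population):
    # Sample two individuals and keep the one with the lower cost.
    first, second = rng.sample(population, 2)
    return first if first["cost"] < second["cost"] else second


def restart(population, nb_keep_on_restart, min_pop_size):
    # Carry over a few tournament-selected individuals (possibly zero)...
    survivors = [binary_tournament(population) for _ in range(nb_keep_on_restart)]
    # ...then top the population back up with fresh random individuals.
    return survivors + [random_individual() for _ in range(min_pop_size)]


population = [random_individual() for _ in range(25)]
population = restart(population, nb_keep_on_restart=1, min_pop_size=25)
```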
I also tried to mutate the best individual using some slack-induction string removal (SISR) operators, but had no success with that either.
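For reference, the destroy/repair shape I mean is roughly the following. This is heavily simplified and all names are made up; the actual SISR operator removes strings around a seed customer and reinserts them far more carefully:

```python
import random

rng = random.Random(0)


def remove_string(route, max_len):
    # Remove a random contiguous "string" of customers from a route.
    length = rng.randint(1, min(max_len, len(route)))
    start = rng.randint(0, len(route) - length)
    return route[:start] + route[start + length:], route[start:start + length]


def string_removal_mutate(routes, num_strings=2, max_len=3):
    # Destroy: rip strings out of a few randomly chosen non-empty routes.
    removed = []
    for _ in range(num_strings):
        idx = rng.choice([i for i, r in enumerate(routes) if r])
        routes[idx], gone = remove_string(routes[idx], max_len)
        removed.extend(gone)
    # Repair: the real operator reinserts at cheap positions; appending to
    # the shortest route keeps this sketch minimal.
    for customer in removed:
        min(routes, key=len).append(customer)
    return routes


print(string_removal_mutate([[1, 2, 3, 4], [5, 6, 7], [8, 9, 10, 11]]))
```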
I'm really surprised that keeping parts of the old population leads to no improvement, and confused that nothing works 🤷‍♂️. Maybe I just didn't hit the sweet spot.
So keeping more of the old population is consistently worse (going from 0 -> 1 -> 15, the objective never decreases). Larger values for `nbIter` seem to be worse as well, but the evidence isn't as clear-cut. We should definitely investigate restarting fairly often when tuning (e.g., `nbIter` should probably be in the range of [1K, 10K] or so).
[1k, 10k] seems like a good range to me.
We may also want to change how `nbIterNoImprovement` is updated. Right now, it looks at the current best feasible solution: if that's not improved, the counter increments by one. The original implementation looks at whether the global best solution is improved. I don't know if the latter is better, but if we do consider that one, we need to have a much larger range, something like [5k, 20k]. Otherwise, all restarted populations keep restarting without even getting close to a good solution.
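To make the difference between the two rules concrete, here's a toy loop (all names and the random stand-in costs are mine, not the repository's; only the counter bookkeeping matters):

```python
import random

rng = random.Random(1)


def run(nb_iter, use_global_best, total_iters=50_000):
    global_best = float("inf")   # best over the whole run
    restart_best = float("inf")  # best since the last restart
    counter, restarts = 0, 0

    for _ in range(total_iters):
        # Stand-in for one search iteration producing a candidate cost.
        cost = rng.uniform(164_000, 170_000)

        improved_restart = cost < restart_best
        improved_global = cost < global_best
        restart_best = min(restart_best, cost)
        global_best = min(global_best, cost)

        # Current rule: reset on improving the post-restart incumbent.
        # Original rule: reset only on improving the global best.
        improved = improved_global if use_global_best else improved_restart
        counter = 0 if improved else counter + 1

        if counter >= nb_iter:  # nbIterNoImprovement reached: restart
            restart_best = float("inf")
            counter = 0
            restarts += 1

    return global_best, restarts


print(run(5_000, use_global_best=False))  # current rule
print(run(5_000, use_global_best=True))   # original rule
```

With the global-best rule, the counter almost never resets after the first restart, so a small `nbIter` just churns through restarts before any fresh population matures; hence the much larger range it would need.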
> The original implementation looks at whether the global best solution is improved. I don't know if the latter is better, but if we do consider that one, we need to have a much larger range, something like [5k, 20k].
I feel like that original implementation is fairly flawed, and probably not better than what we have now. We observe that after a few thousand iterations the thing basically gets stuck, and never really gets close to a best solution from earlier restarts in the current run: it needs a restart and another go before possibly improving the objective again.
It's probably just better to quit after a little while without improvement, so as to explore more.
I think you are right. Thinking back on why I changed it: there are a lot of instances where the algorithm only gets stuck after some 30K iterations. A restart threshold based on global improvement would then need to be at least 30K, but that's way too much for other instances.
The current implementation is much more robust to this.
I'll close this issue because there's not enough time to try something else, and `nbKeepOnRestart=0` seems to work fine. Perhaps parameter tuning will discover a better value, but who knows.