Comments (21)

N-Wouda commented on June 25, 2024

This is intentional; that's how Vidal designed it. Doing repair (and adding the result if it is feasible) has a slight benefit according to this background paper (e.g. Table 4).
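As a rough illustration of that design (hypothetical types and helper names, not the actual solver code): the offspring is educated, kept even when infeasible, and a repaired copy is added as well when repair reaches feasibility.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the repair-and-keep-both flow; educate() and
// repair() stand in for local search with normal and boosted penalties.
struct Individual
{
    double cost;
    bool feasible;
};

Individual educate(Individual indiv) { return indiv; }  // placeholder LS
Individual repair(Individual indiv) { return indiv; }   // placeholder LS, higher penalties

void addOffspring(std::vector<Individual> &population, Individual offspring)
{
    offspring = educate(offspring);
    population.push_back(offspring);  // kept even when infeasible

    if (!offspring.feasible)
    {
        auto const repaired = repair(offspring);

        if (repaired.feasible)               // add the repaired copy only
            population.push_back(repaired);  // if repair actually succeeded
    }
}
```

So a single infeasible offspring can contribute two individuals to the population: itself and its feasible repair.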

from euro-neurips-2022.

N-Wouda commented on June 25, 2024

See also here for the baseline implementation doing the same thing.

leonlan commented on June 25, 2024

This is intentional; that's how Vidal designed it. Doing repair (and adding the result if it is feasible) has a slight benefit according to this background paper (e.g. Table 4).

I see, Vidal even explicitly mentions it. I still don't understand the added benefit of keeping the infeasible individual in the population when the individual is repaired to feasibility. Diversity reasons?

N-Wouda commented on June 25, 2024

My understanding is that such a feas/infeas pair is very near the edge of the feasible set, so they're very close to where 'best' solutions are likely to be found. Keeping both might be good to later generate additional solutions near the feasibility boundary.

@rijalarpan what do you think about this?

N-Wouda commented on June 25, 2024

@leonlan @rijalarpan what do you think of this? I feel like we should probably keep this as-is.

leonlan commented on June 25, 2024

I'll close this issue. It might still be relevant because we use a different population management strategy, but I'll bring it up again if & when relevant.

rijalarpan commented on June 25, 2024

My understanding is that such a feas/infeas pair is very near the edge of the feasible set, so they're very close to where 'best' solutions are likely to be found. Keeping both might be good to later generate additional solutions near the feasibility boundary.

@rijalarpan what do you think about this?

Repairing rather than immediately discarding infeasible solutions is the idea, it seems. But the code only adds feasible solutions to the population. Are infeasible solutions added to a separate list?

N-Wouda commented on June 25, 2024

Repairing rather than immediately discarding infeasible solutions is the idea, it seems. But the code only adds feasible solutions to the population. Are infeasible solutions added to a separate list?

Both the infeasible and the repaired feasible individuals are added to the population, in the same list. We're contemplating separating the lists again in #55 (the baseline has separate feas/infeas subpopulations).

rijalarpan commented on June 25, 2024

A separate set of populations may be a prudent choice. Because the penalties for capacity and time warp are arbitrary, infeasible and feasible solutions are not directly comparable. In a single list, infeasible solutions can then drive out feasible ones, which can leave the population without any feasible solutions at all.
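A minimal sketch of that idea (illustrative names, not the solver's code): with separate subpopulations, survivor selection in one list can never displace members of the other.

```cpp
#include <cassert>
#include <vector>

struct Individual
{
    double cost;
    bool feasible;
};

// Two subpopulations, as in the baseline: feasible and infeasible
// individuals never compete against each other for survival.
struct Subpopulations
{
    std::vector<Individual> feasible;
    std::vector<Individual> infeasible;

    void add(Individual const &indiv)
    {
        (indiv.feasible ? feasible : infeasible).push_back(indiv);
    }
};
```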

N-Wouda commented on June 25, 2024

Why are we reopening this?

leonlan commented on June 25, 2024

Why are we reopening this?

The discussion is on-going, no? I thought I closed it too early.

N-Wouda commented on June 25, 2024

Why are we reopening this?

The discussion is on-going, no? I thought I closed it too early.

Yeah, fair!

N-Wouda commented on June 25, 2024

@rijalarpan

A separate set of populations may be a prudent choice. Because the penalties for capacity and time warp are arbitrary, infeasible and feasible solutions are not directly comparable. In a single list, infeasible solutions can then drive out feasible ones, which can leave the population without any feasible solutions at all.

After determining fitness, the individuals with the worst fitness are removed from the population during the survivor selection phase. The relevant part of the code for computing the fitness of our solutions is the following snippet (slightly abbreviated):

    // Ranking the individuals based on their diversity contribution (decreasing
    // order of broken pairs distance)
    std::vector<std::pair<double, size_t>> ranking;
    for (size_t idx = 0; idx != population.size(); idx++)
    {
        auto const dist = population[idx]->avgBrokenPairsDistanceClosest();
        ranking.emplace_back(dist, idx);
    }

    std::sort(ranking.begin(), ranking.end(), std::greater<>());

    auto const popSize = static_cast<double>(population.size());

    for (size_t idx = 0; idx != population.size(); idx++)
    {
        // Normalise the diversity rank and the objective rank (the
        // individual's position in the cost-sorted population) to [0, 1]
        double const divRank = idx / (popSize - 1);
        double const fitRank = ranking[idx].second / (popSize - 1);

        // If the population holds no more individuals than the elite
        // count, fitness is just the objective rank
        if (population.size() <= params.config.nbElite)
            fitness[ranking[idx].second] = fitRank;
        else if (params.config.diversityWeight > 0)
            fitness[ranking[idx].second]
                = fitRank + params.config.diversityWeight * divRank;
        else
            fitness[ranking[idx].second]
                = fitRank + (1.0 - params.config.nbElite / popSize) * divRank;
    }

The population is sorted by increasing objective value (best solutions first). This code combines that information, via index position (lower is better), with a diversity measure (the average broken pairs distance to an individual's closest neighbours). I suspect we can do better if we normalise the objective values somehow, instead of just using the index. And we might need to experiment with the settings available here, because we have not really done so yet.
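The normalisation idea could look something like the following sketch (illustrative, not the current implementation): min-max scale the raw objective values instead of using rank indices, and combine with a diversity term. Here `diversity[idx]` is assumed to be the average broken pairs distance in [0, 1], where higher means more diverse.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch: fitness from min-max normalised objectives plus a weighted
// diversity penalty. Lower fitness is better.
std::vector<double> computeFitness(std::vector<double> const &objectives,
                                   std::vector<double> const &diversity,
                                   double divWeight)
{
    auto const [minIt, maxIt]
        = std::minmax_element(objectives.begin(), objectives.end());
    auto const range = std::max(*maxIt - *minIt, 1e-9);  // guard against /0

    std::vector<double> fitness(objectives.size());

    for (size_t idx = 0; idx != objectives.size(); idx++)
    {
        auto const objNorm = (objectives[idx] - *minIt) / range;  // 0 = best
        auto const divNorm = 1.0 - diversity[idx];  // 0 = most diverse
        fitness[idx] = objNorm + divWeight * divNorm;
    }

    return fitness;
}
```

Unlike the rank-based scheme, this makes the fitness sensitive to how much worse a solution is, not just its position in the ordering.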

Or we should have a look at how the literature determines fitness in general. There's probably a lot out there as well :-).

rijalarpan commented on June 25, 2024

Good point. This is quite simple: the distance between solutions could be normalised by the solution quality. Maintaining the elite members makes sense, but I cannot fully understand multiplying by divRank in the second and third conditions.

There are tons of options for fitness evaluation, but determining which one is relevant may require loads of experimentation. Leaving this as-is may not be a bad idea. However, you could also use this method, with some modification, for the simulated annealing idea of maintaining inferior solutions for diversity: a worse solution is kept in the elite population probabilistically.
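The simulated annealing suggestion could be sketched as follows (hypothetical, not part of the solver): a worse solution survives with probability exp(-delta / T), so inferior solutions are occasionally kept for diversity.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Metropolis-style survival rule: always keep improving solutions, keep
// worse ones with probability exp(-delta / temperature).
bool survives(double cost, double bestCost, double temperature, std::mt19937 &rng)
{
    if (cost <= bestCost)
        return true;

    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    auto const delta = cost - bestCost;
    return uniform(rng) < std::exp(-delta / temperature);
}
```

Lowering the temperature over the run would make survival of inferior solutions increasingly unlikely.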

leonlan commented on June 25, 2024

It seems a bit wasteful to repair infeasible solutions of low quality. I'll check on this and report back when I have the results.

N-Wouda commented on June 25, 2024

Agreed. If we can somehow determine quality before attempting to improve the solution further, we could save quite a bit of work before committing to another (perhaps useless) LS run. But how do we determine low quality? :-)

leonlan commented on June 25, 2024

Agreed. If we can somehow determine quality before attempting to improve the solution further, we could save quite a bit of work before committing to another (perhaps useless) LS run. But how do we determine low quality? :-)

I think when the (infeasible) solution has a cost lower than (1) the best feasible solution, or (2) the average of the feasible solutions. Something in that direction. Option (1) seems most promising to me: why bother repairing infeasible solutions that are not even better than what we already have?
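Both criteria are cheap to check before committing to the repair local search. A sketch (illustrative helper names, assuming penalised costs for the infeasible candidate):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Criterion 1: repair only if the infeasible solution's cost beats the
// best feasible solution found so far.
bool beatsBest(double cost, std::vector<double> const &feasCosts)
{
    return feasCosts.empty()
           || cost < *std::min_element(feasCosts.begin(), feasCosts.end());
}

// Criterion 2: repair only if it beats the average feasible cost.
bool beatsAverage(double cost, std::vector<double> const &feasCosts)
{
    if (feasCosts.empty())  // nothing feasible yet: always worth a shot
        return true;

    auto const avg = std::accumulate(feasCosts.begin(), feasCosts.end(), 0.0)
                     / feasCosts.size();
    return cost < avg;
}
```

Criterion 1 is the stricter of the two, so it skips more repair runs.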

N-Wouda commented on June 25, 2024

Why bother repairing infeasible solutions that are not even better than what we already have?

Maybe if it has some structure that's really promising and could be exploited by a crossover with another individual in a later iteration? Not sure, just thinking. But it's not immediately obvious to me how we should decide that another go at this is definitely not worth it.

I think when the (infeasible) solution has cost lower than 1) the best feasible solution or 2) the average feasible solutions.

That makes sense to me, so we might as well try these. The benchmark can decide for us whether this is an improvement :-).

leonlan commented on June 25, 2024

Initial experiments show a 20% reduction in runtime with no loss of quality if we only repair infeasible solutions that are better than the current feasible best. I'll throw it on the cluster to benchmark and report back tomorrow 😎

leonlan commented on June 25, 2024

Maybe if it has some structure that's really promising and could be exploited by a crossover with another individual in a later iteration? Not sure, just thinking. But it's not immediately obvious to me how we should decide that another go at this is definitely not worth it.

I'm also speculating here:

The infeasible solutions are stored during education regardless of whether they are repaired or not (the reason why I opened this issue, haha). So it's still possible to exploit this structure in later iterations.

But suppose that we repair a "bad" infeasible solution to feasibility. The resulting solution is likely not of great quality, so its substructures are not great either. This can help us create more diverse solutions, but the population management should already be taking care of that.

So to make the most effective use of repair, one should derive repaired solutions from high-quality ones. But this can also dig us deeper into a local optimum...

I'll just wait for the benchmark results 😆

leonlan commented on June 25, 2024

Nope. Faster by 15-ish% but not better. I'll close the issue.
