ntnu-ai-lab / EvoLP.jl

A playground for evolutionary computation in Julia

Home Page: https://ntnu-ai-lab.github.io/EvoLP.jl/
License: MIT License
Right now the examples are just `.jl` files. Having them in notebook format would let people see them directly in the GitHub repo.
We have created a fancy way to report the results of an algorithm. We should implement it in the built-in algorithms:
This includes renaming and adding parent selection and survivor selection operators (right now we only have parent selection).
The names of the generators don't follow any specific convention: we have `rand_pop_int_perm` but also `rand_particle_uniform` and `rand_pop_uniform`, so coming up with a convention is probably a good idea.
Info about the return value of type `Result` is not updated. Please update it accordingly:
Every base algorithm should have at least one variation using the new and fancy Logbook in it.
See for example the GA here.
To document, we're using the Documenter.jl
module, which takes the docstrings in the source to generate the documentation. I'll update here with a list of stuff we need to consider:
- `Result` information from algorithms (and move to `Result`)
- The `Result` type has `optimizer` as field 1 and `optimum` as field 2. Most of the time what we care about is the optimum, so that should be first :)
- `runtime` (both the method and the value!)

Continuing #15 here.
An example:
Given a function evaluated in a closed interval, e.g., the Eggholder Function which is evaluated with values of
An individual that is close to
Not extremely urgent but we should address this.
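If the concern is individuals drifting outside the closed interval, a simple repair step is clamping. A minimal sketch, assuming the usual Eggholder domain of [-512, 512] per dimension:

```julia
# Sketch: repairing an out-of-bounds individual by clamping it to the
# evaluation interval (the Eggholder function is commonly defined on
# [-512, 512] in each dimension).
lower, upper = -512.0, 512.0
ind = [600.3, -750.0]                  # individual outside the domain
repaired = clamp.(ind, lower, upper)   # -> [512.0, -512.0]
```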
A crowding and a niching "block" would be highly useful. How should we deal with that?
Provide a standard built-in notebook for fitness and another one for population. What should they include?
We add a new constructor with a different set of built-in functions.
These two functions are implemented and documented, but their documentation is not compiled.
Just update docs/src/man/testfunctions.md
So far, the built-in algorithms only calculate the statistics passed using the fitnesses (which is useful in most cases). However, one may want to calculate statistics on other vectors; e.g. entropy (as a means of monitoring diversity) considers the probability of bit values, and for that it needs the population vector.
There are several ways to fix this. Off the top of my head, some of the ideas:
Temporal solution, hack/patch: use two Logbooks, one for each data vector.
Long-term solution 1, moderately complex: modify the built-ins such that they receive a parameter that handles this situation.
Long-term solution 2, moderately complex: modify the logger so that it creates a mapping between a string X tuple, instead of string X callable.
Long-term solution 3, highly complex: add a callback module to the end of the built-in algorithms.
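For reference, the entropy computation mentioned above could look like this. This is a standalone sketch of bitwise Shannon entropy over a population of bit vectors; it is not tied to the Logbook API, and the function name is made up:

```julia
# Sketch: bitwise Shannon entropy as a diversity measure over a
# population of equal-length bit vectors. Not tied to the Logbook API;
# the function name is made up.
function bit_entropy(pop::Vector{Vector{Int}})
    n = length(pop)
    H = 0.0
    for locus in eachindex(pop[1])
        p = count(ind -> ind[locus] == 1, pop) / n   # P(bit == 1) at this locus
        if 0 < p < 1
            H -= p * log2(p) + (1 - p) * log2(1 - p)
        end
    end
    return H / length(pop[1])   # mean per-locus entropy, in [0, 1]
end

pop = [[1, 0, 1], [0, 0, 1], [1, 1, 1], [0, 0, 1]]
bit_entropy(pop)   # higher means more diverse bit values
```

A converged population (all individuals identical) scores 0, while maximally mixed bits score 1, which is what makes it useful for monitoring diversity over generations.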
Edit 10.07.23: Now we add a new function that uses the new compute method in the rest of the Algorithms
10.07.23: We will move algorithms to a sub-module, and only provide blocks in a general form.
We need the tools and blocks. If you want to use algorithms, feel free; but one should not have to do that!
I want to add more benchmarks. Should we start with those in the book? Maybe we can extend to those in other optimisation suites.
So far, the return value of the built-in algorithms is a 2-tuple of the form `(best, pop)`, which is normally what one would expect from an algorithm like this (see for example MATLAB's `min` or `max` functions).
However, this could be reworked so that it returns a more concrete tuple or type with additional information:
I assume the best method for this is a new type that has immutable fields like these. Any other idea?
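A sketch of what such an immutable type could look like; the field names and order here are illustrative only, not a final design:

```julia
# Sketch: an immutable result type carrying the extra information.
# Field names are illustrative, not a final design.
struct Result
    optimum            # best fitness value found
    optimizer          # individual that achieves it
    population         # final population
    n_evals::Int       # number of objective function calls
    runtime::Float64   # wall-clock time in seconds
end

res = Result(-1.0, [0.0, 0.0], [[0.0, 0.0]], 100, 0.42)
# `struct` fields are read-only: `res.optimum = 0.0` would throw an error
```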
The two new functions, Egg Holder and Rana, have a `Vector{T} where {T <: Real}` type constraint. We should implement the same for the other functions, and test it.
test/testfunctions.jl
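Applying the same constraint to another function might look like this; `sphere` here is only an illustrative stand-in, not one of the built-ins:

```julia
# Sketch: the `Vector{T} where {T <: Real}` constraint applied to an
# illustrative test function (not one of EvoLP's built-ins).
function sphere(x::Vector{T}) where {T <: Real}
    return sum(xi^2 for xi in x)
end

sphere([1, 2, 3])    # Int vectors are accepted -> 14
sphere([1.0, 2.0])   # so are Float vectors -> 5.0
# sphere(["a", "b"]) would raise a MethodError
```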
We have identified that it is possible to support an earlier version of Julia instead of requiring 1.8 (discovered while working on IDUN, actually). This will allow users on earlier versions to use the framework. We will use 1.7 as the lower target version.
Project.toml
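Assuming the standard Pkg compat mechanism, the change would amount to something along these lines in `Project.toml`:

```toml
[compat]
julia = "1.7"
```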
We don't have a test suite for selectors. Come up with some tests and populate both the deprecated selection methods and the new selectors.
test/deprecated.jl
test/selection.jl
List here all tests:
Examples now reside in an `examples` folder inside the main EvoLP folder. They should be moved to the documentation instead, in an Examples subsection.
The new `rand_pop_int_perm` now works with the assumption that you pass a set (or a vector with no repeated objects), such that sampling without replacement generates a permutation.
We can make it behave as a combination by allowing sampling with replacement, and this way we have an additional generator in the same function.
By changing this, we will need to re-document it and refactor the example.
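A sketch of the distinction, using only Base and the Random stdlib (variable names are made up):

```julia
using Random

# Sketch: the same source set, sampled without and with replacement.
set = [10, 20, 30, 40]

perm = shuffle(set)            # without replacement: a permutation of `set`
comb = rand(set, length(set))  # with replacement: repeats are allowed
```

Folding both behaviors into one generator would then just be a keyword argument choosing between the two sampling calls.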
Right now, algorithms use the code almost verbatim, as presented in Kochenderfer's book. However, there are a few improvements that can be made regarding function calls.
As an example, the old version of the GA algorithms had an additional step in which the function was evaluated at the end to find the optimum before reporting it. This would extract the index in the population. To return the fitness, the function was evaluated again on that individual, as the fitness was not being saved. This resulted in an extra call.
We solved it by using the `findmin` function, which returns both the minimum and its index. In this way we make one fewer call.
Is it worth it if it's just one fewer call? Yes, because maybe each evaluation takes a million seconds to complete. We want to keep calls to a minimum.
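The pattern described above, sketched with a made-up objective and population:

```julia
# Sketch: a single `findmin` over the stored fitnesses yields both the
# best value and its index, so the objective is never called twice.
f(x) = sum(abs2, x)                        # illustrative objective
population = [[1.0, 2.0], [0.5, 0.5], [3.0, 0.0]]

fitnesses = f.(population)                 # one evaluation per individual
best_fit, best_idx = findmin(fitnesses)    # (0.5, 2)
best_ind = population[best_idx]            # no re-evaluation needed
```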
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml
to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix
on this issue.
I'll open a PR within a few hours, please be patient!
For some reason I never wrote down the return values of the built-in algorithms. I should do that.
Add the new functions to the test suite in `test/testfunctions.jl`:
See #79 and include the relevant information in the documentation.
Let's meet to discuss what should be prioritized when it comes to performance and parallelization.
Making sure the steps are right to submit to the Julia package listing and such
Continuing the work on #60.
According to Invenia guidelines, we don't have any specific renamed/modified symbol names, nor function signatures, nor moved anything. We can just add a warning in the documentation.
Let's do that and then merge.
Originally planned to add the logos to the docs as well, but handling of images in Documenter is not yet very friendly.
A new README is necessary:
Making sure everything pertaining to documentation is ok:
Making sure things are a-OK in GitHub:
Mentioned before, but what about having something like MLDatasets, but for algorithms, for EvoLP?
A bunch of the operators and their structure will be reworked for EvoLP 2.0.
We need to keep the old ones in a `deprecated.jl` file, and provide the new bindings as well. Take a look at the following hierarchy:
The following operators need to be deprecated, replaced, and tested:

`SelectionMethod`s are disappearing! We need to deprecate them via `depwarn`:

- `RankBasedSelectionGenerational`
- `RouletteWheelSelectionGenerational`
- `TournamentSelectionGenerational`
- `TruncationSelectionGenerational`

`SelectionMethod` is now `Selector`:

- `Selector` now has one abstract sub-type (so far): `ParentSelector`
- `DemeSelector`* (planned, see the ontology)
- `RankBasedSelectionSteady` is now `RankBasedSelector`
- `RouletteWheelSelectionSteady` is now `RouletteWheelSelector`
- `TournamentSelectionSteady` is now `TournamentSelector`
- `TruncationSelectionSteady` is now `TruncationSelector`

`MutationMethod` should be `Mutator`:

- `Mutator` should have three subtypes (so far): `ContinuousMutator` (or maybe `Real`?), `BinaryMutator`, and `PermutationMutator`
- `GaussianMutator` as a subtype of `ContinuousMutator`
- `BitwiseMutator` as a subtype of `BinaryMutator`
- `SwapMutator` as a subtype of `PermutationMutator`
- `Insert` as a subtype of `PermutationMutator`
- `ScrambleMutator` as a subtype of `PermutationMutator`
- `InversionMutator` as a subtype of `PermutationMutator`

`CrossoverMethod` should be `Recombinator`:

- `Recombinator` should have 2 subtypes (so far): `NumericRecombinator` (`ContinuousRecombinator`?) and `PermutationRecombinator`
- `InterpolationRecombinator` as a subtype of `ContinuousRecombinator`
- `SinglePointRecombinator` as a subtype of `NumericRecombinator`
- `TwoPointRecombinator` as a subtype of `NumericRecombinator`
- `UniformRecombinator` as a subtype of `NumericRecombinator`
- `OX1Recombinator` as a subtype of `PermutationRecombinator`
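The deprecation itself could follow the usual `Base.depwarn` pattern; the sketch below uses placeholder type definitions rather than EvoLP's real ones:

```julia
# Sketch: keeping an old binding alive in deprecated.jl while warning
# users towards the new name. The types here are placeholders for
# illustration, not EvoLP's actual definitions.
abstract type Selector end

struct TournamentSelector <: Selector
    k::Int
end

function TournamentSelectionSteady(k)
    Base.depwarn("`TournamentSelectionSteady` is deprecated, use `TournamentSelector` instead.",
                 :TournamentSelectionSteady)
    return TournamentSelector(k)
end

TournamentSelectionSteady(3)  # warns (with --depwarn=yes) and returns a TournamentSelector
```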
Add support for two new tested (and approved) real-valued functions:
So far we have only used continuous and binary domains. A good idea is to implement crossover and mutation operators that can work with permutation-based representations.
We might as well try to set up a benchmark and then continue with an example.
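As a starting point, a swap mutation for permutations could look like this (a sketch; the name `swap_mutate` is made up and not part of EvoLP):

```julia
using Random

# Sketch of a swap mutation for permutation-based representations:
# exchange the contents of two distinct positions, which preserves
# the permutation property. The name `swap_mutate` is made up.
function swap_mutate(ind::Vector{Int})
    child = copy(ind)
    i, j = randperm(length(child))[1:2]       # two distinct positions
    child[i], child[j] = child[j], child[i]   # swap them
    return child
end

parent = [3, 1, 4, 2, 5]
child = swap_mutate(parent)   # still a permutation of 1:5
```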
Right now the generators create the population by taking a set of parameters and working it out from scratch.
Another alternative would be to take an individual as a base and then generate the population based on that.
This is a nice enhancement but is not required immediately. Also, it would only make sense on discrete ones:
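For a permutation-based (discrete) representation, a base-individual generator could be as simple as reshuffling the base; a sketch with made-up names:

```julia
using Random

# Sketch: generating a population from a base individual instead of
# from scratch. For a discrete, permutation-based representation each
# member is simply a reshuffling of the base. Names are made up.
pop_from_base(base::Vector{Int}, n::Int) = [shuffle(base) for _ in 1:n]

base = [1, 2, 3, 4, 5]
pop = pop_from_base(base, 10)   # 10 permutations of the base individual
```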
Take for example the `oneplusone` signature:

`oneplusone(logger::Logbook, f, ind, k_max, M)`

Since `logger` is being modified by the `compute!` function inside, `oneplusone` should have an exclamation mark in its name as well. This would also make the difference between `oneplusone` and `oneplusone!` easier to notice.

- `oneplusone!`
- `GA!`
- `PSO!`
This is a wrapper for a sequence of steps needed to comply with community standards as described here.
Issue and pull request templates will be set up later, when the need arises.
Adding TagBot to keep up with the versioning is a great idea. Let's do that.
Continuing the work on #60.
It is a good idea to provide numbers on what the improvements represent. I have noted the complexity reduction in the git log, but we can present a report by using BenchmarkTools.jl.
So far, I have the following:
Calls for original logbook GA:
The optimised version (reimplemented and after emptying the logbook):
Doing the same for the rest of the algorithms seems like a good idea. We can include it in the changelog. (Do we have a changelog?)
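Besides timing, raw call counts can be gathered without any external package by wrapping the objective; a sketch with made-up names:

```julia
# Sketch: wrap an objective so every call is counted, to compare the
# original and optimised versions of an algorithm. Names are made up.
function counted(f)
    calls = Ref(0)                      # mutable call counter
    g(x...) = (calls[] += 1; f(x...))   # counting wrapper around f
    return g, calls
end

f(x) = sum(abs2, x)
g, calls = counted(f)
g([1.0, 2.0]); g([3.0, 4.0])
calls[]  # -> 2
```

Running both versions of an algorithm against such a wrapped objective gives the call-count comparison directly, independent of machine speed.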