fs-private's People

Contributors

gfrances, jfermes, miquelramirez, nirlipo

fs-private's Issues

Squash non-determinism

There seems to be some source of non-determinism in the C++ backend of the planner, which causes slight variations in the results. This can be reproduced with the following command:

python3 preprocessor/runner.py --tag t4 --instance /home/gfrances/projects/code/fs-benchmarks/benchmarks/blocksworld-fn/instance_20_2.pddl --driver lazybfws --options "successor_generation=naive,bfws.rs=sim,evaluator_t=adaptive" --run

The JSON files generated by the frontend are identical, and the logs generated by the (release-mode) planner are also equivalent, but the number of expansions, etc. seems to differ between runs.

Small refactoring needed for the successor generation strategies

The current default successor generation strategy is the naive one, but we are performing a lot of pre-compilation to get a bunch of data structures ready for more sophisticated strategies that never get used.
This should be cleanly separated, so that those structures are created only when necessary.
Check SmartActionManager::compute_whitelist to begin working on this.

ASP grounder fails on Maintenance

In Maintenance, the done predicate is incorrectly being identified as static.
This is because the only effect that modifies the extension of done is statically pruned.
It is an effect involving universal quantification and conditional effects:

(forall (?plane - plane) (when (at ?plane ?day ?airport) (done ?plane)))

The link_groundings method of the conditional effect class doesn't work as
expected (I think), and I suspect it has to do with the fact that the condition
of the effect contains a reference to a universally quantified variable.
Indeed for an effect such as the Maintenance effect
(forall (?plane - plane) (when (at ?plane ?day ?airport) (done ?plane))),
and taking into account that at is static, the grounding should leave us
with a number of ground actions, e.g. work-at(d1, a2), for which
the universally quantified effect is expanded into |P| non-quantified effects
(P being the set of planes), some of which are pruned (because the effect condition
can be statically shown to be false), and some of which are left as non-conditional
effects (because the effect condition can be statically shown to be a tautology).
Just in case this helps...
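To make the three cases explicit, here is a toy Python sketch of the static pruning described above (all names are hypothetical; this is not the actual link_groundings code):

```python
# Toy sketch (hypothetical names): expanding a universally quantified
# conditional effect such as
#   (forall (?plane - plane) (when (at ?plane ?day ?airport) (done ?plane)))
# for one ground action, pruning on the static truth value of the condition.

TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def expand_effect(planes, static_condition_value):
    """static_condition_value maps each plane to the statically known truth
    value of the (static) condition atom under the grounding, or UNKNOWN."""
    unconditional, conditional = [], []
    for p in planes:
        v = static_condition_value(p)
        if v == FALSE:
            continue  # condition statically false: prune this expanded effect
        elif v == TRUE:
            unconditional.append(("done", p))  # tautology: plain effect
        else:
            conditional.append((("at", p), ("done", p)))  # keep conditional
    return unconditional, conditional
```

The bug report above suggests the actual grounding drops the whole quantified effect instead of keeping the non-pruned branches.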

ASP grounder fails on Visitall

I've been looking into it but haven't been able to locate the exact source of the problem.
The ASP grounder component fails when processing visitall instances. E.g. in the smallest instance:

python3 preprocessor/runner.py --gringo  --tag testing --instance ~/projects/code/downward-benchmarks/visitall-sat11-strips/problem12.pddl

The error is related to the grounding of (sub-)conditions, apparently some grounding is not properly processed.
Perhaps @miquelramirez could have a look at it?
Visitall seems to be the only IPC14 domain where this happens.

I'm attaching the output I get, just in case.

Merge iw_run and mv_iw_run back

iw_run and mv_iw_run could probably be merged back to reduce a lot of code duplication.
See some good notes by @miquelramirez on this here: #91 (comment). I'm copying them verbatim below:

  • IWRunNode vs MultiValuedRunNode: the biggest differences between the two classes lie in code which was commented out and which I deleted. Other than that, the real (and subtle) difference I see is that the attribute _nov_2_pairs needs to be defined differently in each class: in the former it is a table of pairs of AtomIdx, in the latter a pair of Width1Tuple, a type which is imported from lapkt::novelty and defined around the language type in which we represent our datums (FSFeatureValueT).

  • SimulationEvaluator vs MultiValuedSimulationEvaluator: the major difference between the two classes is in the implementation of the method reached_atoms(). In both classes it is defined as const, but it has different return types: std::vector<bool> vs. std::vector<Width1Tuple>. This is important: the former assumes datums are booleans, the latter assumes datums to be FSFeatureValueT. Also a convenience accessor to the set of features associated with the evaluator has been added to the interface of MultiValuedSimulationEvaluator.

  • IWRun::Config vs MultiValuedIWRun::Config: the attribute mark_negative has been removed from the latter, as it doesn't make sense in the context of the second class. Three new attributes have been added to the latter: _R_file and _log_search activate the code that allows loading the R set from a file and the code that logs the IW search tree into a JSON document; _filter_R_set is no longer used: I did some experimentation with some "post-processing" on the R-set, without much success.

  • MultiValuedIWRun::DeactivateZCC: class added to track the option to ignore state constraints during the IW(k) lookahead. The term "zero crossings" refers to allowing IW to shoot through the "holes" in the state space created by state constraints.

  • IWRun vs MultiValuedIWRun: quite a few changes

    • The attribute _visited has been added to MultiValuedIWRun to store the set of states visited during the lookahead.
    • MultiValuedIWRun::mark_tuples_in_path_to_subgoal(): important changes here; first, datums are no longer boolean, and second, rather than accessing state variables directly, we use the feature set in the evaluator.
    • MultiValuedIWRun::mark_all_tuples_in_path_to_subgoal(): analogous changes to the above.

  • MultiValuedIWRun::compute_R_all(): Method has been disabled with an exception.

  • MultiValuedIWRun::compute_R(): Added code to load R-set from file.

  • MultiValuedIWRun::compute_plain_RG2(): Added check for goal_directed flag, if false, extract_R_1 is called.

  • MultiValuedIWRun::compute_plain_R1(): Changed return type to match that of the datums.

  • MultiValuedIWRun::load_R_set(): Loads the R set from a file.

  • MultiValuedIWRun::filter_R_set(): Experiments with R-set filtering, no longer used.

  • MultiValuedIWRun::compute_coupled_features(): added to IWRun because of static polymorphism. Method was used in experiments filtering the R-set, no longer used.

  • MultiValuedIWRun::dump_R_set(): Code to dump the R-set to a file.

  • MultiValuedIWRun::extract_R_1(): Changed return type to match that of datum.

  • MultiValuedIWRun::compute_R_g_prime(): $R_{all}$ fallback disabled.

  • MultiValuedIWRun::compute_adaptive_R(): Changed return type to match that of datum.

  • MultiValuedIWRun::extract_R_G(): Added code to dump R set to file, changed return type to match that of the datum.

  • MultiValuedIWRun::extract_R_G_1(): Idem as MultiValuedIWRun::extract_R_G().

  • MultiValuedIWRun::run(): added code to check whether the IW(k) lookahead needs to respect state constraints, and uses this info to influence the behaviour of the applicable-action iterators.

  • MultiValuedIWRun::report(): added code to store the search tree that follows from IW(k) into a file.

Summary

The filtering of R-sets and its associated methods need to be moved out of the class. I would like to keep them somewhere, as a snippet of some sort and a note, since the idea of post-processing R-sets may have some potential.

The rest of the changes are related to switching the underlying datum used to compute w(s) from bool to FSFeatureValueT (which is unsigned), and to the fact that we need to use the features to obtain the relevant valuations.

Double Free involving APTK h_{FF}

While profiling the Match Tree, I have come up with the following stack trace (reported by callgrind)

==9420== Process terminating with default action of signal 6 (SIGABRT)
==9420==    at 0x6E70428: raise (raise.c:54)
==9420==    by 0x6E72029: abort (abort.c:89)
==9420==    by 0x6EB27E9: __libc_message (libc_fatal.c:175)
==9420==    by 0x6EBAE09: _int_free (malloc.c:5004)
==9420==    by 0x6EBE98B: free (malloc.c:2966)
==9420==    by 0x4DE0530: std::_Sp_counted_ptr_inplace<aptk::agnostic::Relaxed_Plan_Heuristic<aptk::agnostic::Fwd_Search_Problem, aptk::agnostic::H1_Heuristic<aptk::agnostic::Fwd_Search_Problem, aptk::agnostic::H_Add_Evaluation_Function, (aptk::agnostic::H1_Cost_Function)1>, (aptk::agnostic::RP_Cost_Function)1>, std::allocator<aptk::agnostic::Relaxed_Plan_Heuristic<aptk::agnostic::Fwd_Search_Problem, aptk::agnostic::H1_Heuristic<aptk::agnostic::Fwd_Search_Problem, aptk::agnostic::H_Add_Evaluation_Function, (aptk::agnostic::H1_Cost_Function)1>, (aptk::agnostic::RP_Cost_Function)1> >, (__gnu_cxx::_Lock_policy)2>::_M_dispose() (bit_set.hxx:32)
==9420==    by 0x4D8FA23: fs0::bfws::LazyBFWSDriver<fs0::SimpleStateModel>::~LazyBFWSDriver() (shared_ptr_base.h:150)
==9420==    by 0x4D88D43: fs0::drivers::EngineRegistry::~EngineRegistry() (registry.cxx:62)
==9420==    by 0x6E74FF7: __run_exit_handlers (exit.c:82)
==9420==    by 0x6E75044: exit (exit.c:104)
==9420==    by 0x6E5B836: (below main) (libc-start.c:325)

Ugly, ugly, as we say in Spanish. It is not very clear to me yet what exactly is causing it... perhaps an interaction between C++11-style reference counting and some old-style code (i.e. aptk::bitset?). Anyway, it seems to me to be a "bug" in APTK.

Specialize data structures and algorithms to conjunction-of-atoms formulas

The current Formula data structures are too inefficient for width-based algorithms that require the same formula to be evaluated over and over again a large number of times. We should see how we can specialize this for the large number of problems where all formulas are conjunctions of simple atoms. Most of the time is now spent evaluating the formulas recursively, whereas a simple specialization that efficiently stores the necessary information in a std::vector<Atom> or similar would be much more efficient for evaluation.

Additional note: the UnsatisfiedGoalAtomsHeuristic should also be specialized to handle this.
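As a rough illustration of the intended specialization (a Python mock-up of the C++ idea; the class and method names here are made up):

```python
# Sketch of the proposed specialization: a conjunction of atoms X=v is stored
# as a flat list of (variable, value) pairs and evaluated with a single linear
# pass over the state, instead of a recursive walk over a formula AST.

class ConjunctionOfAtoms:
    def __init__(self, atoms):
        self._atoms = list(atoms)  # [(var_index, value), ...]

    def evaluate(self, state):
        """True iff every atom X=v holds in the given state."""
        return all(state[var] == val for var, val in self._atoms)

    def num_unsatisfied(self, state):
        """What a specialized UnsatisfiedGoalAtomsHeuristic would compute."""
        return sum(1 for var, val in self._atoms if state[var] != val)
```

The same flat representation that speeds up evaluation also serves the unsatisfied-goal-atoms count directly.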

Slowly decoupling core from hybrid...

Hi @miquelramirez ! This is related to issue #91 .

I'm trying to completely decouple dynamics, hybrids, etc. from the main planner.
I think that I'm pretty much done except for two places where core still has a dependency on hybrid stuff:

  1. In the search helper, in Utils::SearchExecution (a new class I've just created),
    when we find a plan, we want to do some extra checks in case the problem has continuous change.
    These are pretty simple, and currently live in Utils::on_plan_found.
    Do you think it is feasible to aim for a design where the core module has no knowledge of the hybrid module?
    I.e., in this case we would not be able to create a dynamics::HybridPlan object from a source file in the core
    module.

  2. In the different search setups, in core/search/drivers/setups.cxx, where the WaitAction is added.
    This seems a bit trickier, but I still think we should consider the WaitAction as not pertaining to the core of the planner.

A possibility, e.g. for the first case, could be to have an alternative HybridSearchExecution class which subclasses SearchExecution and redefines the on_plan_found method.
Another possibility, if we think that this is not too intelligent design-wise, is to have a different module, SearchWrappers, which sits on top of core / hybrid, etc.

Any thoughts? Not urgent, no need to sync the code, etc.; for the moment I've commented out the few relevant lines of code to move forward and have the whole thing compiling, but this of course needs to be re-enabled as soon as we decide how to do it.

Stressing Lifted Planning driver

Hello,

I tried to run FS with the lifted driver on the first instance of this very interesting benchmark:

http://www.cs.ryerson.ca/~mes/publications/organicChemistrySynthesisBenchmarkPDDL.zip

published on this site. See the publications mentioned on this second link for details on the origins of the model.

FS is crashing with the following exception

Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Loading problem data
Number of objects: 121
Number of state variables: 42483
Number of action schemata: 52
Number of (perhaps partially) ground actions: 0
Number of state constraints: 0
Number of goal conditions: 66
Starting search...
terminate called after throwing an instance of 'Gecode::Int::VariableEmptyDomain'
  what():  IntVar::IntVar: Attempt to create variable with empty domain

and the following stack trace

#0  0x00007ffff52ab428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x00007ffff52ad02a in __GI_abort () at abort.c:89
#2  0x00007ffff58e484d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x00007ffff58e26b6 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x00007ffff58e2701 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x00007ffff58e2919 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007ffff5c38fc9 in Gecode::IntVar::IntVar (this=0x7fffffffd2a0, home=..., ds=...) at gecode/int/var/int.cpp:55
#7  0x00007ffff79c8f12 in fs0::gecode::Helper::createVariable (csp=..., typeId=23) at build/debug/src/constraints/gecode/helper.cxx:45
#8  0x00007ffff79c8cc0 in fs0::gecode::Helper::createTemporaryVariable (csp=..., typeId=23) at build/debug/src/constraints/gecode/helper.cxx:20
#9  0x00007ffff79cc38e in fs0::gecode::CSPTranslator::registerExistentialVariable (this=0x2b21fd0, variable=0x2b2b3e0) at build/debug/src/constraints/gecode/csp_translator.cxx:51
#10 0x00007ffff79d5a90 in fs0::gecode::BoundVariableTermTranslator::registerVariables (this=0x2830d60, term=0x2b2b3e0, translator=...) at build/debug/src/constraints/gecode/translators/component_translator.cxx:44
#11 0x00007ffff79de98a in fs0::gecode::BaseCSP::registerTermVariables (term=0x2b2b3e0, translator=...) at build/debug/src/constraints/gecode/handlers/base_csp.cxx:22
#12 0x00007ffff79dedba in fs0::gecode::BaseCSP::register_csp_variables (this=0x2b21e60) at build/debug/src/constraints/gecode/handlers/base_csp.cxx:96
#13 0x00007ffff79df0fb in fs0::gecode::BaseCSP::createCSPVariables (this=0x2b21e60, use_novelty_constraint=false) at build/debug/src/constraints/gecode/handlers/base_csp.cxx:136
#14 0x00007ffff79f6548 in fs0::gecode::BaseActionCSP::init (this=0x2b21e60, use_novelty_constraint=false) at build/debug/src/constraints/gecode/handlers/base_action_csp.cxx:35
#15 0x00007ffff79f4123 in fs0::gecode::LiftedActionCSP::init (this=0x2b21e60, use_novelty_constraint=false) at build/debug/src/constraints/gecode/handlers/lifted_action_csp.cxx:41
#16 0x00007ffff79f3e77 in fs0::gecode::LiftedActionCSP::create_derived (schemata=std::vector of length 52, capacity 64 = {...}, tuple_index=..., approximate=false, novelty=false)
    at build/debug/src/constraints/gecode/handlers/lifted_action_csp.cxx:27
#17 0x00007ffff79ae3e8 in fs0::drivers::FullyLiftedDriver::setup (this=0x7fffffffd6af, config=..., problem=...) at build/debug/src/search/drivers/fully_lifted_driver.cxx:47
#18 0x00007ffff7968cfa in fs0::drivers::SearchUtils::instantiate_seach_engine_and_run (problem=..., config=..., driver_tag="lifted", out_dir=".", start_time=0.0199999996) at build/debug/src/search/search.cxx:84
#19 0x00007ffff7967377 in fs0::drivers::Runner::run (this=0x7fffffffd9a0) at build/debug/src/search/runner.cxx:33
#20 0x0000000000403e78 in main (argc=3, argv=0x7fffffffdb88) at main.cxx:13

It looks like my assumption that STRIPS is a subset of FSTRIPS holds, but there is some implementation issue when the state variables are boolean? Note that the domains could easily be recoded with functions replacing the predicates (bond) and (doublebond). But anyway, I think it would be healthy to get to the bottom of this.

[MINOR] Change the --run flag of the python script

By default, the run.py script should run the planner, and we should have a --compile-only flag which prevents the actual execution of the backend.
Keep in mind that this change will affect experiment generation scripts, etc.

FS Plans Over Processes

Integrate processes into the existing planning models:

  • Create the wait action if applicable (i.e. "natural" actions in the model)
  • Integrate numeric integrators
  • Implement the transition function

Grounding of Functional STRIPS Benchmark "simple-sokoban-fn" broken?

After syncing the definitions of externally defined symbols to the new interfaces - see PR - I find that FS aborts at some point during the grounding of instance 6_3 of simple-sokoban-fn.

Steps to reproduce:

  1. Apply PR on the fs-benchmarks repo
  2. Run command
python3 preprocessor/runner.py --tag foo --instance $BENCHMARKS/simple-sokoban-fn/instance_6_3.pddl --run --driver=smart

Sorry for not being able to investigate further, I have to look into other stuff.

ASP Grounder: Check concurrency issues when several processes write the LP file

When several processes write to the same text file where the LP is dumped (currently always named ipc.pre) for the ASP solver to solve, there will likely be concurrency issues.
This will be a problem once we want to run experiments on a cluster.
The output file should be placed in the results directory, or a similar instance-specific location.

Optimize Novelty Tables

As of now, novelty tables are optimized/specialized for tuples of length one and two. The code could however be much more performant if we also specialized it for the case where all novelty features are given by the values of state variables, i.e. they simply tell whether an atom holds in the state or not (no additional fancy features).
If we assume that, which is something we can assume for the experiments we're currently running with IPC benchmarks, we can optimize the code in the following ways:

  • For the length-1 tables, _width_1_tuples needs to be a std::vector of length k, where k is the number of atoms given by our AtomIndex. _width_1_tuples[i] will then be true if the atom with index i has already been seen in the search. I expect this to have a significant performance impact when compared to the current strategy of having an unordered_set of tuples, etc.

  • Likewise for length-2 tables, where the simplest case would be to have an unordered_set<pair<AtomIdx, AtomIdx>> or, if the number of atoms is low enough, perhaps conflate the two indexes into one and have an unordered_set<long>.
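A minimal Python mock-up of the proposed length-1 table (illustrative names; the real code would be C++ over AtomIdx):

```python
# Mock-up of the proposed length-1 novelty table: a flat boolean array indexed
# by atom index, replacing an unordered_set of tuples. A state has novelty 1
# iff it contains some atom whose slot is still False.

class Width1Table:
    def __init__(self, num_atoms):
        self._seen = [False] * num_atoms  # the _width_1_tuples vector

    def update(self, state_atoms):
        """Mark the atoms of a state; return True iff the state was novel,
        i.e. it contained some previously unseen atom."""
        novel = False
        for idx in state_atoms:
            if not self._seen[idx]:
                self._seen[idx] = True
                novel = True
        return novel
```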

Any thoughts on this, @miquelramirez ?

Undo APTK-FF integration?

Eventually, we might want to undo (or leave in a separate branch?) the integration with APTK-FF, meaning:

  • The inclusion of Nir's AAAI code within our codebase
  • The inclusion of APTK libraries and include paths in the build process.
  • The code performing the conversion between APTK and FS states (we should definitely store a pointer to the relevant commit in the wiki, or elsewhere)

What do you think, @miquelramirez?? This is not urgent, just meant to be a reminder so that that code does not remain there forever without being used...

Configuration --driver bfs really wants to use the Match Tree

When using FS with a configuration involving the bfs driver, the applicability manager chooses the Match Tree strategy even when there are non-predicative symbols in the preconditions of actions. More aggravatingly, it ignores --options successor_generation=naive because, apparently, it knows best :o)

Datatype Extension: Rational Numbers

Tasks:

  • Refactor StateAtomIndexer to support the new type (float)
  • Add to ObjectTypes the entry FLOAT for rational numbers
  • Implement loading of FLOATs
  • ...

Use ASP-based grounder with FSTRIPS problems

We should strive to use the ASP grounder as much as possible to process FSTRIPS problems as well.
For simple uses of functions, e.g. the functional version of Blocksworld, this should be possible, as long as we transform functional atoms such as loc(a)=b into equivalent predicate atoms loc_(a,b), etc., possibly using existential quantification when nested terms are involved.

For other, more complex uses, perhaps we should keep using the standard grounder.
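As a toy illustration of the suggested transformation (the function name and tuple encoding are hypothetical): a flat functional atom f(t1,...,tn) = v becomes a predicate atom f_(t1,...,tn,v); nested terms would additionally need existential variables, which this sketch does not handle.

```python
# Hypothetical sketch: rewrite a flat functional atom f(t1,...,tn) = v into
# an equivalent predicate atom f_(t1,...,tn,v), so that an ASP grounder that
# only understands predicates can process it.

def functional_to_predicate(symbol, args, value):
    """E.g. loc(a) = b  ->  loc_(a, b)."""
    return (symbol + "_", tuple(args) + (value,))
```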

Full Support for Derived Predicates

The name says it all. We have support for externally-defined functions and predicates, the latter of which can be viewed as "black-box" derived predicates, but we should give full support to standard derived predicates as well.

Refactoring Code Structure

@miquelramirez , I'm trying to figure out which of the novelty features here could/should be moved to a new location modules/hybrid that I'm creating (don't look for it, it's only local ATM). The very name modules/hybrid is tentative, but essentially I want to put there all code related to soplex. Feel free to suggest a better name, or to suggest splitting that into two or more conceptually differentiated modules, and I'll do that. In any case, for instance the elliptical_2d.* code will end up there, because it uses soplex, etc.
I will probably have some "feature wrapper" code in modules/hybrid that conditionally includes soplex, and throws a runtime error if both

  • the use_soplex flag was not used during compilation, and
  • the user is trying to use a soplex-related option in the command line, such as e.g. features.elliptical_2d, etc.

Thoughts?

Converting FD front-end "mutex groups" into FSTRIPS functions

We have been discussing the possibility of acquiring more succinct representations of the IPC benchmarks by re-using FD's powerful invariant inference components. The basic idea is that for every mutex group M that FD uncovers (a set of ground predicates), we automatically define the tuple < f_{M}, t_{M}, M>, where f_{M} is a function, t_{M} is a type, and M becomes a set of constant objects of type t_{M}.

TODO

[ ] Work out the TODO list

Expose the gringo parser in a cleaner manner

The (as of now, optional) use of the ASP-based parser by the frontend should be slightly refactored so that:

  • A system-wide clingo installation should be used, instead of looking for a binary in a subpath of the FS directory. If a system-wide version (or, if necessary, some version of Clingo pointed to by, say, a certain environment variable) is found, then the option to ground the problem with Clingo is made available. Otherwise, it is not, and an exception is thrown if the user specifies the --gringo flag, etc. (Perhaps the flag name should be changed to hide the internals of what we use to solve the LP?)

  • A bit more thought should be given to what we want to do when the problem is an FSTRIPS problem. FSTRIPS models usually suffer less from grounding, as actions are much more compact, but we might still benefit from the reachability analysis now performed with the LP.

  • Ultimately, it doesn't make much sense to keep two different PDDL parsers. The goal here would perhaps be to get rid of one of the two... ideally the FD parser, since that would mean one dependency fewer.
    (Note: @miquelramirez 's "smart" parser also introduces a dependency on pyparsing, but of course that's a much cleaner dependency than one on a patched, old, and no-longer-maintained version of the FD codebase.) Also of relevance here: Anders has a C++ PDDL parser which he is actively maintaining and pushing forward. As of now, though, I wouldn't even suggest trying to port all the work of generating the LP to a different programming language, much less to C++ :-)

Distinguish between static and dynamic variables that might derive from the same logical symbol

(C&P from elsewhere:)

Say I'm grounding the move action in Visitall with grounding move(A,B), and it turns out that B is the initial position of the agent. This means that visited(B) is detected by the reachability analysis as statically true, and hence visited(B) is not a state variable.
This is causing some errors right now, because when grounding the action, I try to recover the ID of the state variable visited(B) to create an effect visited(B) = true, but since that is not a state variable, that effect should be pruned. Agreed?

(I say this is subtle because until now I was only considering whether a predicate is or is not static, not that certain state variables derived from a predicate might be static while others are not.)

Decoupling FS from BFWS: Roadmap

We need to completely decouple the BFWS engine from the rest of the elements in the FS planner.
This needs to be done along the lines outlined in the IJCAI'17 submission: the BFWS component's
only interface is a simulation-based one, in which all that is needed are black-box implementations
of the set of actions A, the set of actions A(s) applicable in a given state,
and some information about the structure of the state (i.e. number and type of variables)
and of the goals (i.e. a decomposition of the set of goals G into possibly-overlapping
sets of subgoals G_i).

We need a clean, well-documented, and ideally 0-overhead interface.
As a tentative roadmap, the following current components of FS should fall within "FS user space".

  • All of the Python front-end
  • The FSTRIPS language component
  • The "applicable-action" iterators
  • The hybrid state (?)
  • All of the Gecode-based components. The BFWS component needs to be completely independent of Gecode.

And the following current components of FS should fall within the "BFWS component / library":

  • The novelty evaluators
  • The novelty-feature interfaces
  • The IW(k)
  • The width-based GBFS search engines.

Of course this last component will have a distinct namespace.
I'm not sure whether it makes sense to integrate it wholly inside LAPKT, or whether it would be
better off as a distinct library (or perhaps sub-library) on top of LAPKT.
Special care should be taken to ensure that the components are well-tested and that the
physical design is as good as possible, to ensure fast compilation times.

ASP Grounder: Fix order for action schemas and groundings

We'll want to return the action schemas in the same order provided by the user in the PDDL domain
(as is done with the normal grounder), and we'll also want to return the valid groundings for each
action schema in lexicographical order, or whatever order is used in the normal grounder.
This way comparisons between the two of them are more meaningful.
Shouldn't be too complicated.

Simplify State Models

Currently there are three different state models: "ground", "lifted", and "simple". The latter was aimed at being a replacement for the other two, but never got there. Either we find some purpose for it, or we get rid of it and return to the old distinction between ground and lifted.

Throw out-of-range exceptions when a state variable is assigned a value outside its declared range

Currently, for range variables, whenever the application of an action results in some (range) state variable taking a value outside its declared range, the action is implicitly considered non-applicable.
This is useful, but too often leads to subtle model bugs that are hard to debug. A better approach would be to leave to the modeler the responsibility of keeping those variables within their ranges. Whenever an action which is considered applicable (its preconditions being satisfied) results in an out-of-range value, a runtime out-of-range exception should be thrown.
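A sketch of the proposed behaviour, as a Python stand-in for the backend logic (names are illustrative):

```python
# Sketch (hypothetical names): assigning a value to a range variable raises
# an exception instead of silently marking the action as non-applicable.

class OutOfRangeError(Exception):
    pass

def apply_effect(variable, value, lower, upper):
    """Return the new value, or raise if it falls outside [lower, upper]."""
    if not (lower <= value <= upper):
        raise OutOfRangeError(
            f"Variable {variable!r} assigned {value}, outside [{lower}, {upper}]")
    return value
```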

Extend Match-Tree to deal with multivalued variables

Meaning that it would support a small but nonetheless useful fragment of FSTRIPS.
Namely, only problems where the precondition of every action is a conjunction of FOL equality atoms each involving at most one state variable, i.e. of the form X=c, would be supported.
We could actually lift the restriction to equality atoms and support atoms such as X<c, etc., but I'd leave that for a future implementation.

Check Match-Tree performance

I don't have much time to check this now, but we should perhaps have a look at why the match tree successor generation strategy (see #6) is not outperforming the naive one. Is that to be expected?

In some domains it indeed clearly outperforms it, but in others it is exactly the other way round. This deserves further examination.

ASP Grounder fails on Barman

I found a (minor?) bug in the ASP-based grounder. It is visible e.g. with problem barman-sat11-strips/pfile06-021.pddl. The groundings that result from action schema clean-shaker(h1: hand, h2: hand, s: shaker) include ground actions such as clean-shaker(left, left, shot5), where shot5 is not a shaker, but an object of the sibling type shot.

I've inspected the source of the grounder; the problem lies in the set of groundings returned by the LP solution. Inspecting a bit the LP generated by the grounder, my intuition is that the problem lies in the fact that the predicate empty is defined over objects of type container, and this propagates up to the action clean-shaker. I see that "type" predicates are defined in the LP, but used nowhere. Perhaps extra RHS predicates should be used in the rules that define when an action is reachable? E.g. for clean-shaker the current rule is:

reachable_a(clean__shaker(A, B, C)) :- reachable(c67(A, C, B)).

And perhaps should be:

reachable_a(clean__shaker(A, B, C)) :- reachable(c67(A, C, B)), hand(A), hand(B), shaker(C).

Let me know what you think, Miquel.

Fix hashing of Term and Formula subclasses

The specialization of std::hash and std::equal_to for Term and Formula subclasses, which is defined in terms.hxx, etc., assumes that these functions can be specialized for Term* and that the specialization will apply to pointers to subclasses of Term as well, e.g. when we define an std::unordered_set<const Constant*>. This is not the case, and hence we should revise all the places where such data structures are defined on a class other than Term, because they are likely not behaving as expected (i.e. the standard hash/equal_to functions are used, which compare the pointer addresses).

We also need to check whether the equality and hashing operators for subclasses of Formula/Term are well defined. In particular, those for the classes Constant and BoundVariable.

bfws-v1.0-beta-4 compilation errors

I can't compile bfws-v1.0-beta-4, g++ reports the following error:

.build/prod/src/problem_info.cxx: In member function ‘void fs0::ProblemInfo::loadTypeIndex(const Value&)’:
.build/prod/src/problem_info.cxx:237:48: error: ‘type_id’ is not a class, namespace, or enumeration
     typeObjects[type_id].push_back(make_object(type_id::object_t, value));

??

Better Debugging Output from the Front-End

The Python front-end should output not only the current json specifications, but also human-readable equivalents. For instance, for state variables, it should output a file state-variables.txt with a list of all state variables with their ID:

0: loc(b1)
1: loc(b2)
....

Simpler FSTRIPS ASTs

The current FSTRIPS language implementation is too involved, and because of this we're having endless bugs over and over (well, because of this and because our automated tests are still a bit rudimentary...).

There's some ongoing work to simplify this, I'll just open this issue as a big reminder and as a place to index all of the issues that are related to / would be strongly affected by this.

ATM these issues are, at least: #18, #51, #56, #57.

ASP Grounder fails (again) on Maintenance

This is due to a larger bug: in the ASP grounder preprocessing, the extension of static symbols is not being output to the "data" directory, and hence the backend can't load it properly. In the case of Maintenance, this results in the backend thinking that there's no true at(apX, X) atom.

Specialize for Classic STRIPS Operators

If we assume that all problems we have to deal with are propositional, that all operators are given in the classic "set-theoretic" STRIPS representation, and further that checking preconditions and generating the next state is the bottleneck, as it is with BFWS and width-based algorithms, what would a more aggressive degree of optimization look like?

State Representation

There are three highly-engineered options that I would test, in increasing degree of sophistication (and complexity):

  1. using std::vector<bool>s,
  2. using std::vector<uint64_t>s and managing the bitwise operations ourselves. For instance, a state with 112 state variables will require two uint64_ts.
  3. using static std::bitset<N>s and compiling per-instance executables specialized to the number N of fluents. For instance, for a problem with 112 fluents, we will compile a binary where the declaration of the state contains a single member std::bitset<112>. On a modern 64-bit architecture, this gets transformed under the hood into an array of two longs; the source code is pretty well-designed and self-explanatory.

Running preliminary tests on this without a "full-scale" implementation/refactoring should be easy.

Operator Representation

Whichever of the above options is chosen, what should the representation of operators be?
This is discussed in some old email with miquel and hector which I cannot find right now...
Operators get compiled into three "extended" bitmaps with the same implementation as the state (i.e. a direct set-theoretical implementation, with sets represented as bitmaps): the pre, the add and the del bitmap. All operations described below assume there is one single bitmap, i.e. #fluents <= 64, but can be extended trivially to larger states (in an easily parallelizable manner, btw). Assume we have an operator o and a state s:

  1. The pre bitmap is the bitmap encoding of pre(o) seen as a set. o is applicable in s iff:
    (o.pre & s) == o.pre
  2. The add bitmap is the bitmap encoding of add(o) seen as a set, and the del bitmap is the complement of the bitmap encoding of del(o) seen as a set. The successor state s' = f(o, s) is:
    (s & o.del) | o.add

This is of course the trivial, direct implementation of the STRIPS semantics, nothing revolutionary.
One important thing to keep in mind is how this potentially interacts with our match-tree implementation. For problems with a small number of ground actions, we might prefer not to use the match tree at all and simply iterate through all actions, etc.

Usability (at least for v2 branch)

Hi all,

assume I am a random user who wants to try out the planner and has some previous experience in planning :). Is it possible that I cannot find a way to make it run on the provided examples?
It would be nice to have a hello world kind of example where things run.

My story

I have installed everything following the instructions and finally got 0 errors/warnings, except for one thing I spotted: "hybrid" is disabled. Anyway, since I haven't encountered any alarm, I guess it is OK, since "hybrid" can mean everything and nothing at the same time from my position now.

Reading the instructions, it seems it doesn't matter which search strategy I use, so I run the following:

./run.py -i examples/hybrid/continuous/walkbot/instance_001.pddl --driver sbfws --options "bfws.rs=sim"

and I get this:

Problem domain: walkbot
Problem instance: instance_001
Working directory: /home/enrico/fs-private/workspace/180202/walkbot/instance_001
Traceback (most recent call last):
  File "./run.py", line 17, in <module>
    runner.main(sys.argv[1:])
  File "/home/enrico/fs-private/python/parser/runner.py", line 287, in main
    return run(parse_arguments(args))
  File "/home/enrico/fs-private/python/parser/runner.py", line 266, in run
    fd_task = parse_pddl_task(args.domain, args.instance)
  File "/home/enrico/fs-private/python/parser/runner.py", line 64, in parse_pddl_task
    task = tasks.Task.parse(domain_pddl, task_pddl)
  File "/home/enrico/fs-private/python/parser/pddl/tasks.py", line 49, in parse
    = parse_domain(domain_pddl)
  File "/home/enrico/fs-private/python/parser/pddl/tasks.py", line 175, in parse_domain
    action = actions.Action.parse(entry)
  File "/home/enrico/fs-private/python/parser/pddl/actions.py", line 30, in parse
    assert action_tag == ":action"
AssertionError

Is this normal? :)

Enrico

Non-goal states detected as goal states

@miquelramirez, we've seen that in some domains non-goal states are being detected as goal states. I've traced the problem down to this line here in a commit of yours, and it looks like a bug / typo.
Perhaps you intended something different here? What is the intended behaviour? Looking at the code, _best_found can be any non-solution node, so I'm not sure what this call to extract_plan is supposed to achieve?

LAPKT-private or LAPKT-public

void insert(const NodePtrT& node) override {

I had to change the definition of insert to return bool:

	bool insert(const NodePtrT& node) override {
		if ( node->dead_end() ) return false;
		this->push( node );
		already_in_open_.insert( node );
		return true;
	}

Miquel recommended the private branch of LAPKT for beta5

https://github.com/aig-upf/lapkt-base-private/blob/v2-work/src/lapkt/search/interfaces/open_list.hxx#L42

But beta7 assumes that we use the public branch of LAPKT

https://github.com/LAPKT-dev/LAPKT-public/blob/v2-work/aptk2/search/interfaces/open_list.hxx#L42

Which one should I use? I can commit the change and edit the readme to point to LAPKT-private.

Functional / Unit Testing

An (tentative) list of features / things we want to test:

  • external procedures
  • conditional effects
  • existential quantification
  • universal quantification
  • axioms
  • nested terms
  • static symbols
  • negation in atoms
  • (include some valgrind-based test for memory leaks?)

"Port" BFWS back to the beautiful world of functions

Chasing Nir's planner performance has resulted in a number of changes aimed at optimizing the performance of FS/BFWS with propositional domains. We need to ensure that the planner still works as expected when functional domains are used. This includes:

  • Making sure that the planner can be instantiated with non-binary states, non-binary novelty features, etc.
  • Making sure the ASP-based grounder does not get in the way of the parsing of a functional problem (or better still: adapt the grounder to the functional world!)
