quark's Issues

D-Wave annealing devices removed from Amazon Braket.

In late 2022, all D-Wave devices were removed from Amazon Braket. Attempting to run Annealing or QBSolv with these devices in QUARK will now fail. It has to be evaluated whether the D-Wave offerings now available in the AWS Marketplace are an equivalent alternative, or whether support for D-Wave devices has to be removed from QUARK for now.

Separated Post-Processing

QUARK should be able to combine results from several different runs in a single plot, using the results.csv files of the individual runs as input.
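
A minimal sketch of what this could look like, assuming one directory per run that contains a results.csv with a total_time column (the directory layout and column names are assumptions for illustration):

import glob

import matplotlib.pyplot as plt
import pandas as pd

# Collect the results.csv files of several QUARK runs into one DataFrame.
frames = []
for path in glob.glob("benchmark_runs/*/results.csv"):
    frame = pd.read_csv(path)
    frame["run"] = path  # remember which run each row came from
    frames.append(frame)

combined = pd.concat(frames, ignore_index=True)

# One curve per run, all drawn into a single figure.
for run, group in combined.groupby("run"):
    plt.plot(group["total_time"].values, label=run)
plt.legend()
plt.savefig("combined_runs.png")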

regarding pylint R1728 consider-using-generator

(from #55)

src/modules/applications/optimization/PVC/PVC.py, method validate(...):
Is there a reason why you do # pylint: disable=R1728 (consider-using-generator), instead of using the generator as apparently suggested by pylint?
And what is the reason to use list(set(list([...]))) in the first place?

# pylint: disable=R1728 is used again in src/modules/solvers/GreedyClassicalPVC.py and in src/modules/solvers/ReverseGreedyClassicalPVC.py (both times in the method run(...)). The same question applies here.

Regarding # pylint: disable=R1728: yes, it would be nicer to rewrite these lines to use generators.

In the last Open Call, Marvin told me that he'd like to discuss this further to make sure we don't screw anything up.
The point is mostly about the list(set(list([...]))), which uses a list comprehension to create a list ([...]), only to wrap it in an explicit list(...), followed by a set(...) and yet another list(...). The full line is:

visited_seams = list(set(list([seam[0][0] for seam in solution if seam is not None])))  # pylint: disable=R1728

As far as I can see, seam contains the values ((seam, node), config, tool) (maybe we should document that better btw), meaning seam[0][0] gets the visited seam of the PVC process.
Following that line of code, only the length of visited_seams is used to determine whether all seams got visited.
So, the purpose of this line is to extract all unique visited seams to then count how many there are.

Since you can also get the length of a set, the following line is much shorter, but equivalent in result:

visited_seams = {seam[0][0] for seam in solution if seam is not None}

({...} creates a set.)
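
For illustration, a quick check with a made-up solution in the ((seam, node), config, tool) layout described above:

# Made-up solution in the ((seam, node), config, tool) layout.
solution = [((1, 0), "cfg", "tool"), ((2, 1), "cfg", "tool"),
            ((1, 2), "cfg", "tool"), None]

# Original: list comprehension wrapped in list(), set() and list() again.
old = list(set(list([seam[0][0] for seam in solution if seam is not None])))

# Proposed: a set comprehension; only the number of unique seams matters.
new = {seam[0][0] for seam in solution if seam is not None}

assert len(old) == len(new) == 2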

Is there anything I have overlooked? If not, I'll create the PR in the next few days.

Error during benchmark run: This operation is not supported for complex128 values because it would be ambiguous.

There seems to be a bug somewhere in the code when running this configuration (full config below):

TSP -> Ising -> qiskit -> QAOA -> Powell

At the end of the run, a TypeError is thrown:

TypeError: ufunc 'signbit' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

directly causing:

TypeError: This operation is not supported for complex128 values because it would be ambiguous.

(see Traceback at the end of logger.log)
As far as I can see, a complex value ends up somewhere it shouldn't, but I have no idea how or why.
I clicked through the lines mentioned in the traceback, but couldn't find what is causing the problem.
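
The first TypeError can be reproduced in isolation, since np.signbit has no implementation for complex input. Where the complex value enters the QAOA/Powell path is still open; the sketch below only demonstrates the failure mode and one possible workaround (taking the real part), not the actual fix in QUARK:

import numpy as np

value = np.complex128(1.0 + 0.5j)

try:
    np.signbit(value)  # raises the same TypeError as in logger.log
except TypeError as error:
    print(error)

# If the expectation value is known to be real up to numerical noise,
# taking the real part before handing it to the optimizer avoids this:
print(np.signbit(np.real(value)))  # False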

full config:

application:
  config:
    nodes:
    - 3
  name: TSP
mapping:
  Ising:
    config:
      lagrange_factor:
      - 1.0
      mapping:
      - qiskit
    solver:
    - config:
        depth:
        - 3
        opt_method:
        - Powell
        shots:
        - 10
      device:
      - LocalSimulator
      name: QAOA
repetitions: 1

device_name in QUARK2

The constructor of Device requires 'device_name' as an argument, while the other components have no required constructor arguments. This is still the case in QUARK2. It sometimes causes trouble because a Device must be handled differently than the other components.

I would like to have consistent behaviour in QUARK2: either add a 'name' argument to Core so that every QUARK module has a name, or remove device_name from Device.
I would prefer the first option - it seems natural to me that each component has a name. In that case, the name should be the one given in the QUARK modules configuration.
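
A rough sketch of the first option, with hypothetical class layouts (the actual QUARK2 Core and Device differ in their details):

class Core:
    # Hypothetical: every QUARK module receives the name it carries in the
    # modules configuration, so all components can be handled uniformly.
    def __init__(self, name: str):
        self.name = name


class Device(Core):
    # No separate device_name argument anymore; Device behaves like any
    # other component.
    def __init__(self, name: str):
        super().__init__(name)


class Mapping(Core):
    def __init__(self, name: str):
        super().__init__(name)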

data loss on keyboard interrupt

Hi everyone,

If I run a QUARK configuration with many repetitions and interrupt the run with CTRL-C, the results of the repetitions already completed are lost.

This is because the json.dump that comes after the repetitions loop is not executed in this case. One solution could be to perform this json.dump in the "except KeyboardInterrupt" section as well.
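
A rough sketch of that idea, with made-up names for the repetition function and the results list (the actual BenchmarkManager code will differ); writing in a finally block covers both the normal and the interrupted path:

import json


def run_single_repetition(i):
    # Placeholder for whatever one benchmark repetition does in QUARK.
    return {"repetition": i, "total_time": 0.0}


repetitions = 10
results = []
try:
    for i in range(repetitions):
        results.append(run_single_repetition(i))
except KeyboardInterrupt:
    print("Interrupted, saving the repetitions finished so far.")
finally:
    # This json.dump now also runs on CTRL-C, so finished repetitions
    # are no longer lost.
    with open("results.json", "w") as file:
        json.dump(results, file)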

best wishes,
Jürgen

unintuitive order of the questions at the start of a benchmark run

How it is currently:

  1. Which application, and how should the problem instance be built? (-> SAT, TSP, PVC, ...)
  2. Which mapping do you want? (-> QUBO/Ising/...)
  3. Which solving method do you want to use and how do you want to configure it? (->Annealer, QAOA, Classical, ...)

Intuitively, you would ask the questions in the order 1 -> 3 -> 2, because you want to decide on the solving method before you think about the mapping.

git_uncommitted_changes returned with string type hint, used where bool is expected

As we discussed in the last Open Call, here is the issue for the people that weren't there.

In utils.get_git_revision(...), git_uncommitted_changes is assigned a bool, unless something goes wrong. In that case, it is assigned "unknown". As a result of this, the method is typed to return two strings (first git_revision_number, then git_uncommitted_changes). git_uncommitted_changes is (as far as I can tell) only used in saving a benchmark run.

However, under BenchmarkManager.run_benchmark(...), it is used as a parameter of a new BenchmarkRecord object. The definition of the initializer of BenchmarkRecord expects a bool, not a str. Therefore, PyCharm gives me a 'wrong type' warning.

Should we change the type hint in BenchmarkRecord to a string or do something else?

Since we had some uncertainties about Python's dynamic typing, here's a short demonstration of how it works. If a variable is assigned a bool, it still is a bool and is treated as such, even if its type hint says str.
The type hints are exactly what they're called: type hints.

>>> def foo() -> str:
...     return True
...
>>> test = foo()
>>> type(test)
<class 'bool'>
>>> test
True
>>> test2: str = "test2"
>>> test2
'test2'
>>> type(test2)
<class 'str'>
>>> test2 = foo()
>>> test2
True
>>> type(test2)
<class 'bool'>
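
If the "unknown" fallback is kept, one option (a sketch only; the real signature in utils may differ) is to type the return value honestly as a Union and mirror that in BenchmarkRecord:

import subprocess
from typing import Tuple, Union


def get_git_revision(git_dir: str) -> Tuple[str, Union[bool, str]]:
    # Sketch: returns the revision hash and either a bool or the string
    # "unknown" when the git call fails, and the type hint says so.
    try:
        revision = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], cwd=git_dir).decode().strip()
        uncommitted = bool(subprocess.check_output(
            ["git", "status", "--porcelain"], cwd=git_dir).decode().strip())
        return revision, uncommitted
    except Exception:
        return "unknown", "unknown"

The alternative is to drop the string fallback entirely (e.g. return False on failure), in which case both hints can simply say bool.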

Implement Application Score in QUARK 2.0

In QUARK 1, there were "solution_validity" and "solution_quality", which were mandatory metrics to assess how well the optimization problem was solved. As these metrics cannot be applied to problem classes outside of the optimization realm, we decided to remove them in the first design of QUARK 2.

However, we believe it makes sense to introduce a similar metric called "application_score". The developers can use this optional metric to define a score for their application that can be used to compare different benchmark runs against each other on the application level.

For a given set of application score types, the BenchmarkManager can then provide general functions that create automatic plots, similar to how it is currently done for the total_time metric in the Metrics class.
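
As a rough sketch of what this optional metric could look like on the application side (all names and fields below are hypothetical, not settled API):

class ExampleOptimizationApplication:
    # Hypothetical application module: evaluate() may report an optional
    # application score that the BenchmarkManager could pick up for plotting.
    def evaluate(self, solution: list) -> dict:
        objective = sum(solution)  # placeholder objective computation
        return {
            "application_score_value": objective,
            "application_score_unit": "objective value",
            "application_score_type": "lower is better",
        }


metrics = ExampleOptimizationApplication().evaluate([1, 2, 3])
print(metrics["application_score_value"])  # 6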

bug in summarize: solverConfigCombo not well defined.

summarize does not always recognize that two data sets contribute to the same curve, even though they should.
That happens because 'solverConfigCombo' is created from a dictionary without applying any sorting.

I think
df['solverConfigCombo'] = df.apply(
    lambda row: '/\n'.join(
        ['%s: %s' % (key, value) for (key, value) in sorted(row['solver_config'].items(), key=lambda x: x[0])]) +
    "\ndevice:" + row['device'] + "\nmapping:" + '/\n'.join(
        ['%s: %s' % (key, value) for (key, value) in sorted(row['mapping_config'].items(), key=lambda x: x[0])]),
    axis=1)

should solve the issue for solverConfigCombo (note the 'sorted').

'applicationConfigCombo' probably has the same issue but I have not checked that.
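
A quick illustration of why the sorted(...) matters: two dicts with the same entries but a different insertion order produce different label strings without it:

a = {"shots": 10, "depth": 3}
b = {"depth": 3, "shots": 10}

unsorted_a = '/\n'.join('%s: %s' % (key, value) for (key, value) in a.items())
unsorted_b = '/\n'.join('%s: %s' % (key, value) for (key, value) in b.items())
assert unsorted_a != unsorted_b  # same config, but two different curve labels

sorted_a = '/\n'.join('%s: %s' % (key, value) for (key, value) in sorted(a.items()))
sorted_b = '/\n'.join('%s: %s' % (key, value) for (key, value) in sorted(b.items()))
assert sorted_a == sorted_b  # identical label after sorting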

With best regards, Jürgen

Find problem instance corresponding to some result.

When doing some evaluation of QUARK results, I sometimes need to know the problem instance to which an entry of the results.json corresponds.
My application stores the problem as "problem_<rep_count>.json" in the store directory provided by the benchmark manager. Unfortunately, I do not see an easy way to reconstruct the store directory for a given entry of results.json.

An easy way to solve this problem would be to let the benchmark manager write 'idx_backlog' in the results file.

Improve "Prerequisites" Section in Readme and Tutorial

The content of this section is sufficient to understand how to install QUARK, but it could be clearer about how to use modular configurations and could follow a more consistent style of description.

Priority: Low

Last time this section was adjusted: #67
