evmckinney9 / transpile_benchy

Collection of existing quantum circuit transpilation benchmarking tools

License: MIT License
Finish creating an interface for the rest of the submodules listed in the README. Then combine all submodules for a master list and remove duplicates.
Currently, metrics are computed on circuits after transpilation.
I think it would be better to instead compute metrics on the DAG as a final set of AnalysisPasses.
This avoids needing another circuit-to-DAG conversion.
Collect more papers that include additional benchmarks. For example, Table II in the SABRE paper: https://arxiv.org/pdf/1809.02573.pdf
Check whether it contains circuits we haven't included yet.
When a user initializes an interface, they should be notified of the available circuit names.
Need to add error handling for when a benchmark is given interfaces with conflicting circuit names.
Plot creation should be more configurable. For example, it should be possible to put two different metrics on the same graph.
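A sketch of the two-metrics-on-one-graph case using matplotlib's `twinx()`; the metric names reuse labels from the benchmark output on this page, but the function itself is an illustration, not the repo's actual plotting code:

```python
import matplotlib

matplotlib.use("Agg")  # headless backend so the example runs without a display
import matplotlib.pyplot as plt


def plot_two_metrics(circuits, metric_a, metric_b,
                     labels=("accepted_subs", "monodromy_depth")):
    """Plot two metrics for the same circuits on one figure.

    A twin y-axis lets metrics with very different scales (a 0-1 ratio vs.
    a depth count) share the same x-axis of circuit names.
    """
    fig, ax_left = plt.subplots()
    ax_right = ax_left.twinx()  # second y-axis sharing the same x-axis
    ax_left.plot(circuits, metric_a, marker="o", color="tab:blue")
    ax_right.plot(circuits, metric_b, marker="s", color="tab:orange")
    ax_left.set_ylabel(labels[0])
    ax_right.set_ylabel(labels[1])
    ax_left.set_xlabel("circuit")
    return fig
```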
The logic of the runners has become a mess. See https://github.com/Pitt-JonesLab/virtual-swap/blob/main/src/virtual_swap/pass_managers.py for proof.
Refactor this eventually :)
I originally wrote it so as to minimize DAG-to-circuit conversions: the pre-, main-, and post-process stages only modify the pass manager, and everything runs sequentially in the run function. I think this hurts maintainability, so we should instead just run the pass managers separately.
Currently, the filtering is done in the SubmoduleInterface abstract class. This is silly; just move the filtering into the lower-level get() functions of the interfaces.
:)
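A sketch of what filtering inside a lower-level `get()` could look like; the regex-based `filter_str` and the class shape are assumptions, not the repo's actual interface:

```python
import re


class CircuitSource:
    """Sketch of a concrete interface whose get() applies the name filter
    itself, instead of relying on the abstract base class to post-filter."""

    def __init__(self, circuits, filter_str=None):
        self._circuits = circuits  # mapping: name -> circuit object
        self._filter = re.compile(filter_str) if filter_str else None

    def get(self):
        # Filtering lives here, in the lower-level get(), not in the ABC.
        for name, circuit in self._circuits.items():
            if self._filter is None or self._filter.search(name):
                yield name, circuit
```

Filtering at the source also means a submodule can skip loading circuits it will never yield, instead of loading everything and discarding afterwards.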
Following the refactor updates and the new stats functionality, all documentation needs to be updated.
The changes from #8 also altered the repo in ways that need to be reflected in the docs.
Include an nshots parameter, and rerun each circuit multiple times per transpiler.
Adjust plotting accordingly.
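A minimal sketch of the nshots idea: rerun one circuit several times through a transpiler and keep the per-trial values alongside summary statistics (the names here are illustrative):

```python
from statistics import mean, stdev


def run_trials(transpile_fn, circuit, nshots=5):
    """Rerun one circuit nshots times and summarize the metric.

    transpile_fn is any callable circuit -> metric value. Stochastic passes
    (e.g. SABRE-style routing) give different results per shot, so keeping
    the raw trials alongside the mean lets plots show spread, not just a
    point estimate.
    """
    trials = [transpile_fn(circuit) for _ in range(nshots)]
    return {
        "trials": trials,
        "mean": mean(trials),
        "stdev": stdev(trials) if nshots > 1 else 0.0,
    }
```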
Previously, a partial transition from circuit analysis to DAG analysis was made in the metrics class. The refactor could be improved, and code cleanup is needed in various parts of the runner and benchmark classes.
Output of `print(benchmark)`:

```text
Transpiler: Qiskit-$\sqrt{\texttt{iSWAP}}$
Metric: accepted_subs
Circuit: adder_n4 Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: ae Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: dj Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: fredkin_n3 Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qaoa Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qft Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qftentangled Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qgan Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: toffoli_n3 Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Metric: monodromy_depth
Circuit: adder_n4 Mean result: 6.500 Trials: [6.5, 6.5, 6.5, 6.5, 6.5]
Circuit: ae Mean result: 36.450 Trials: [38.0, 33.5, 38.0, 38.0, 35.0]
Circuit: dj Mean result: 13.041 Trials: [13.0, 14.5, 14.5, 12.0, 11.5]
Circuit: fredkin_n3 Mean result: 10.500 Trials: [10.5, 10.5, 10.5, 10.5, 10.5]
Circuit: qaoa Mean result: 8.500 Trials: [8.5, 8.5, 8.5, 8.5, 8.5]
Circuit: qft Mean result: 29.667 Trials: [29.5, 28.5, 32.5, 29.0, 29.0]
Circuit: qftentangled Mean result: 43.582 Trials: [43.0, 42.0, 45.5, 43.0, 44.5]
Circuit: qgan Mean result: 29.896 Trials: [29.5, 29.5, 30.5, 29.5, 30.5]
Circuit: toffoli_n3 Mean result: 5.269 Trials: [5.0, 5.0, 5.0, 6.5, 5.0]
Transpiler: Qiskit-$\texttt{CNOT}$
Metric: accepted_subs
Circuit: adder_n4 Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: ae Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: dj Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: fredkin_n3 Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qaoa Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qft Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qftentangled Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: qgan Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Circuit: toffoli_n3 Mean result: 0.000 Trials: [0, 0, 0, 0, 0]
Metric: monodromy_depth
Circuit: adder_n4 Mean result: 10.000 Trials: [10.0, 10.0, 10.0, 10.0, 10.0]
Circuit: ae Mean result: 75.507 Trials: [74.0, 74.0, 72.0, 83.0, 75.0]
Circuit: dj Mean result: 16.806 Trials: [14.0, 14.0, 19.0, 18.0, 20.0]
Circuit: fredkin_n3 Mean result: 15.000 Trials: [15.0, 15.0, 15.0, 15.0, 15.0]
Circuit: qaoa Mean result: 17.000 Trials: [17.0, 17.0, 17.0, 17.0, 17.0]
Circuit: qft Mean result: 59.960 Trials: [57.0, 63.0, 59.0, 59.0, 62.0]
Circuit: qftentangled Mean result: 85.764 Trials: [82.0, 89.0, 84.0, 87.0, 87.0]
Circuit: qgan Mean result: 28.871 Trials: [28.0, 41.0, 28.0, 24.0, 26.0]
Circuit: toffoli_n3 Mean result: 7.000 Trials: [7.0, 7.0, 7.0, 7.0, 7.0]
Transpiler: SABREMS-$\sqrt{\texttt{iSWAP}}$
Metric: accepted_subs
Circuit: adder_n4 Mean result: 0.000 Trials: [0.0, 0.0, 0.0, 0.0, 0.0]
Circuit: ae Mean result: 0.822 Trials: [0.25925925925925924, 0.9629629629629629, 0.9629629629629629, 0.9629629629629629, 0.9629629629629629]
Circuit: dj Mean result: 0.200 Trials: [0.0, 0.3333333333333333, 0.3333333333333333, 0.16666666666666666, 0.16666666666666666]
Circuit: fredkin_n3 Mean result: 0.500 Trials: [0.5, 0.5, 0.5, 0.5, 0.5]
Circuit: qaoa Mean result: 0.027 Trials: [0.0, 0.13333333333333333, 0.0, 0.0, 0.0]
Circuit: qft Mean result: 0.770 Trials: [0.3333333333333333, 0.9629629629629629, 0.9629629629629629, 0.9629629629629629, 0.6296296296296297]
Circuit: qftentangled Mean result: 0.794 Trials: [0.7941176470588235, 0.7941176470588235, 0.7941176470588235, 0.7941176470588235, 0.7941176470588235]
Circuit: qgan Mean result: 0.963 Trials: [0.9629629629629629, 0.9629629629629629, 0.9629629629629629, 0.9629629629629629, 0.9629629629629629]
Circuit: toffoli_n3 Mean result: 0.250 Trials: [0.25, 0.25, 0.25, 0.25, 0.25]
```
### Version
0
In https://github.com/Pitt-JonesLab/mirror-gates/blob/main/src/mirror_gates/pass_managers.py, I create a pass manager for Qiskit that does nothing. This should be a default runner. Goes with #8: if the only modification is to post-processing, then adding custom AnalysisPasses should be handled automatically.
Put it in a baseline file. Then the metric statistics utilities can automatically compare against runners defined as baselines.
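The auto-comparison against baselines could be as simple as normalizing each runner's per-circuit means by the baseline's; the data shapes here are assumptions, not the repo's actual data model:

```python
def compare_to_baseline(results, baseline_name):
    """Compute each runner's metric relative to a designated baseline runner.

    results maps runner name -> {circuit: mean metric value}. Returns
    metric/baseline ratios per circuit for every non-baseline runner,
    skipping circuits the baseline lacks or where it scored zero.
    """
    baseline = results[baseline_name]
    ratios = {}
    for runner, metrics in results.items():
        if runner == baseline_name:
            continue
        ratios[runner] = {
            circuit: metrics[circuit] / baseline[circuit]
            for circuit in metrics
            if circuit in baseline and baseline[circuit]
        }
    return ratios
```

With the numbers from the output above, a depth of 6.5 against a baseline depth of 10.0 gives a ratio of 0.65, i.e. a 35% depth reduction over the baseline runner.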