microsoft / qdk-python

The azure-quantum python package submits jobs to the Azure Quantum service.

Home Page: https://learn.microsoft.com/azure/quantum/

License: MIT License


qdk-python's Introduction


Azure Quantum SDK

Introduction

This repository contains the azure-quantum Python SDK.

Use the azure-quantum SDK to submit quantum jobs written in Q#, Qiskit, or Cirq to the Azure Quantum service. The package is published on PyPI as azure-quantum.

Installation and getting started

To install the Azure Quantum package, run:

pip install azure-quantum

If using Qiskit, Cirq, or Q#, include the corresponding optional dependency in the install command:

pip install azure-quantum[qiskit]
pip install azure-quantum[cirq]
pip install azure-quantum[qsharp]
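A quick way to check which of these optional integrations are importable in your environment is a stdlib-only probe like the following (a convenience sketch; `qiskit`, `cirq`, and `qsharp` are the import names the extras provide):

```python
import importlib.util

def available_extras(modules=("qiskit", "cirq", "qsharp")):
    """Return the subset of optional SDK integrations that are importable."""
    return [m for m in modules if importlib.util.find_spec(m) is not None]

# Prints e.g. [] when no optional extras are installed.
print(available_extras())
```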

To get started, visit the following Quickstart guides:

Development

See CONTRIBUTING for instructions on how to build and test.

Contributing

For details on contributing to this repository, see the contributing guide.

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Note on Packages

While we encourage contributions in any part of the code, there are some exceptions to take into account.

  • The package azure.quantum._client is autogenerated using the Azure Quantum Swagger spec. No manual changes to this code are accepted (because they will be lost next time we regenerate the client).

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

qdk-python's People

Contributors

04diiguyi, adelebai, anatoliy-litvinenko, anjbur, anpaz, arthurkamalov, dbanty, dependabot[bot], frtibble, haohaiyu, idavis, israelmiles, ivanbasov, jselig-rigetti, katymccl, kikomiss, kuzminrobin, ltalirz, masenol, microsoftopensource, msoeken, qci-amos, ricardo-espinoza, sanjgupt, scottcarda-ms, shenyingjun, tncc, tonybaloney, vxfield, xinyi-joffre


qdk-python's Issues

Job.get_results() returned by Workspace.list_jobs has wrong timestamp in authentication header

Workspace.list_jobs() returns Jobs with an authentication header with the wrong timestamp.

Steps to reproduce:

jobs = workspace.list_jobs()
jobs[0].get_results()

---------------------------------------------------------------------------
ClientAuthenticationError                 Traceback (most recent call last)
/tmp/ipykernel_24100/2000951921.py in <module>
----> 1 job.get_results()

/usr/local/lib/python3.7/site-packages/azure/quantum/job/job.py in get_results(self, timeout_secs)
    120             )
    121 
--> 122         payload = self.download_data(self.details.output_data_uri)
    123         results = json.loads(payload.decode("utf8"))
    124         return results

/usr/local/lib/python3.7/site-packages/azure/quantum/job/base_job.py in download_data(self, blob_uri)
    266         else:
    267             # blob_uri contains SAS token, use it
--> 268             payload = download_blob(blob_uri)
    269 
    270         return payload

/usr/local/lib/python3.7/site-packages/azure/quantum/storage.py in download_blob(blob_url)
    190     )
    191 
--> 192     response = blob_client.download_blob().readall()
    193     logger.debug(response)
    194 

/usr/local/lib/python3.7/site-packages/azure/core/tracing/decorator.py in wrapper_use_tracer(*args, **kwargs)
     81             span_impl_type = settings.tracing_implementation()
     82             if span_impl_type is None:
---> 83                 return func(*args, **kwargs)
     84 
     85             # Merge span is parameter is set, but only if no explicit parent are passed

/usr/local/lib/python3.7/site-packages/azure/storage/blob/_blob_client.py in download_blob(self, offset, length, **kwargs)
    846             length=length,
    847             **kwargs)
--> 848         return StorageStreamDownloader(**options)
    849 
    850     def _quick_query_options(self, query_expression,

/usr/local/lib/python3.7/site-packages/azure/storage/blob/_download.py in __init__(self, clients, config, start_range, end_range, validate_content, encryption_options, max_concurrency, name, container, encoding, **kwargs)
    347         )
    348 
--> 349         self._response = self._initial_request()
    350         self.properties = self._response.properties
    351         self.properties.name = self.name

/usr/local/lib/python3.7/site-packages/azure/storage/blob/_download.py in _initial_request(self)
    427                     self._file_size = 0
    428                 else:
--> 429                     process_storage_error(error)
    430 
    431             try:

/usr/local/lib/python3.7/site-packages/azure/storage/blob/_shared/response_handlers.py in process_storage_error(storage_error)
    175     try:
    176         # `from None` prevents us from double printing the exception (suppresses generated layer error context)
--> 177         exec("raise error from None")   # pylint: disable=exec-used # nosec
    178     except SyntaxError:
    179         raise error

/usr/local/lib/python3.7/site-packages/azure/storage/blob/_shared/response_handlers.py in <module>

ClientAuthenticationError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:e00a2daf-e01e-0086-1d68-d324cc000000
Time:2021-11-06T23:48:40.1631830Z
ErrorCode:AuthenticationFailed
authenticationerrordetail:Signed expiry time [Fri, 05 Nov 2021 08:55:07 GMT] has to be after signed start time [Sat, 06 Nov 2021 23:48:40 GMT]
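The error above can be diagnosed by inspecting the SAS token embedded in the blob URI. A minimal stdlib-only sketch (`st` and `se` are the standard Azure Storage SAS query keys for signed start and expiry; the helper names here are hypothetical):

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

def sas_window(blob_uri):
    """Extract the (signed start, signed expiry) window from a SAS-signed blob URI."""
    qs = parse_qs(urlparse(blob_uri).query)
    parse = lambda v: datetime.strptime(v, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return parse(qs["st"][0]), parse(qs["se"][0])

def is_expired(blob_uri, now=None):
    """True when 'now' falls outside the token's validity window."""
    now = now or datetime.now(timezone.utc)
    start, expiry = sas_window(blob_uri)
    return not (start <= now <= expiry)
```

In the traceback above, the signed expiry (2021-11-05) precedes the signed start (2021-11-06), so the token can never validate; re-fetching the job obtains a fresh SAS URI.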

'Data for experiment ... could not be found.' error when a job fails

When fetching results for a Qiskit job that failed because the quota was exceeded, the AzureQuantumProvider throws an error:

Job id 5dc72a0a-3911-11ec-9598-00155db11bbe Job Status: job incurred error
Result(backend_name='ionq.qpu', backend_version='1', qobj_id='Qiskit Sample - 3-qubit GHZ circuit', job_id='5dc72a0a-3911-11ec-9598-00155db11bbe', success=False, results=[ExperimentResult(shots=1024, success=False, meas_level=2, data=ExperimentResultData(), header=QobjExperimentHeader())])
---------------------------------------------------------------------------
QiskitError                               Traceback (most recent call last)
/tmp/ipykernel_8079/1369783936.py in <module>
     11 print(result)
     12 counts = {format(n, "03b"): 0 for n in range(8)}
---> 13 counts.update(result.get_counts(circuit))
     14 print(counts)
     15 plot_histogram(counts)

/usr/local/lib/python3.7/site-packages/qiskit/result/result.py in get_counts(self, experiment)
    278         dict_list = []
    279         for key in exp_keys:
--> 280             exp = self._get_experiment(key)
    281             try:
    282                 header = exp.header.to_dict()

/usr/local/lib/python3.7/site-packages/qiskit/result/result.py in _get_experiment(self, key)
    390
    391         if len(exp) == 0:
--> 392             raise QiskitError('Data for experiment "%s" could not be found.' % key)
    393         if len(exp) == 1:
    394             exp = exp[0]

QiskitError: 'Data for experiment "Qiskit Sample - 3-qubit GHZ circuit" could not be found.'

Clarify licenses used

Hi, from what I can see the repo is primarily licensed under MIT but the third party notice file includes many other packages that I don't directly see in here.

Are these the licenses of possible dependencies or where are they from? If they're pulled in when running pip install they shouldn't really be needed in here, I would assume. Or are they in here for a different reason?

Thanks!

Understanding the Python package hierarchy around qdk-python

I'm new to this set of packages, and from looking at the README it wasn't entirely obvious to me how they work together, so I thought it might be useful if I report here what I understood about the package hierarchy (with the goal of then updating the README to make it easier to grok for noobs like me):

  • azure-quantum provides functionality for submitting quantum circuits and problem definitions written in Python. It is independent of the other packages mentioned here.
  • qdk contains (only) qdk.chemistry, which provides tools for converting input formats between quantum chemistry codes and those needed by the simulators. It depends on qsharp.

@guenp Does that sound about right?

If yes, I will then create a follow-up PR to update some of the READMEs.

Some things that confused me

  • The readme mentions qsharp coming soon but it's already here (+ probably no need to mention it in this repository as it's developed elsewhere?)
  • The only use of the azure.quantum package in the example notebook appears to be to get the QDK version. Am I understanding correctly that there are currently two routes to submit simulations - one going through azure.quantum and one going through microsoft.quantum.chemistry? When should one use which?
  • The installation instructions for qdk say to simply pip install qdk and then mention to try the example jupyter notebook but that one requires the additional dependencies from the environment.yml (not just for rdkit but also for the jupyter_jsmol widget etc.)

Unable to use Qiskit features such as AmplitudeEstimation

I've been trying to play around with this example from Qiskit's docs. To submit the circuit to IonQ via Azure, I'm using these instructions.
When I get to the line ae_result = ae.estimate(problem), I end up getting the following error:

FAILURE: Can not get job id, Resubmit the qobj to get job id. Error: 'list' object has no attribute 'clbits' 

This is printed out in a loop until my computer runs out of RAM. If I run the code on IBM's online Quantum Lab, the code terminates with the error:

FAILURE: Can not get job id, Resubmit the qobj to get job id. Error: 'list' object has no attribute 'clbits' 
FAILURE: Can not get job id, Resubmit the qobj to get job id. Error: 'list' object has no attribute 'clbits' 
[...]
FAILURE: Can not get job id, Resubmit the qobj to get job id. Error: 'list' object has no attribute 'clbits' 
Traceback (most recent call last):
  File "/tmp/ipykernel_240/1492105537.py", line 1, in <module>
    ae_result = ae.estimate(problem)
  File "/opt/conda/lib/python3.8/site-packages/qiskit/algorithms/amplitude_estimators/ae.py", line 314, in estimate
    counts = self._quantum_instance.execute(circuit).get_counts()
  File "/opt/conda/lib/python3.8/site-packages/qiskit/utils/quantum_instance.py", line 774, in execute
    run_circuits(
  File "/opt/conda/lib/python3.8/site-packages/qiskit/utils/run_circuits.py", line 507, in run_circuits
    job, job_id = _safe_submit_circuits(
  File "/opt/conda/lib/python3.8/site-packages/qiskit/utils/run_circuits.py", line 694, in _safe_submit_circuits
    raise QiskitError("Max retry limit reached. Failed to submit the qobj correctly")
QiskitError: 'Max retry limit reached. Failed to submit the qobj correctly'

Is this the intended behavior? I'm able to run the circuit if I construct it manually via ae.construct_circuit(problem, measurement=True) and submit that to IonQ, but this way I can't use any of the nice post-processing provided by the wrapper.

Apologies if this isn't the right way to ask, but I have no idea if this is a Qiskit, Azure, or IonQ issue.

Follow Azure SDK for Python guidelines

We need to follow Azure SDK for Python guidelines, including:

  • Use track2 generated client (#54)
  • Support authentication with Azure Identity (#54)
  • Fix code styling (PEP8) issues (#56)
  • Update the CI pipeline to enforce style guidelines (being addressed by #50)
  • Address all open conversations/suggestions from PR #25 (make sure to expand all the hidden conversations in that PR)
  • Have unit tests to support running against live environment (using Azure ARM template for live test environment deployment)

Not all targets have corresponding Target class

In the August release we introduced the Target class that is used for submitting raw payloads to Azure Quantum targets. Target is a light-weight abstract base class that keeps track of the target name, input/output data formats and encoding, and has a convenient submit method to submit the payloads and return a Job instance.

https://github.com/microsoft/qdk-python/blob/5d424f36e88707d3ecb6277fe0eb0a9f31ff4105/azure-quantum/azure/quantum/target/target.py#L12

Currently, the only providers for which we implemented Targets are IonQ and Honeywell, and we have not implemented Microsoft QIO targets. Currently, these targets are implemented using the Solver base class.

https://github.com/microsoft/qdk-python/blob/5d424f36e88707d3ecb6277fe0eb0a9f31ff4105/azure-quantum/azure/quantum/optimization/solvers.py#L58

#125 implements changes to Workspace.get_targets to return a list of Target objects based on what targets are available in the associated subscription. This currently excludes the QIO targets.

To solve this issue, we suggest refactoring the Solver class to inherit from Target.

CC @anpaz

Workspace.get_targets() returns None for targets where no default Target class is specified

Problem: If no default target is specified, Workspace.get_targets() returns None and throws a confusing warning message.
The reason why no Target is returned is because we don't have the following parameters that are required for the Target constructor:

provider_id: str = "",
input_data_format: str = "",
output_data_format: str = "",
content_type: str = "",
encoding: str = "",

Without this information a Target instance cannot be used to submit a Job.


Proposed solution: If no default target is specified, return a Target instance that will fail if you call .submit().

Thanks @xinyi-joffre for catching this.

Playback recording does not record fetched job results, stores secrets

When using playback recording, the fetched job results are not recorded and returned during playback. Also, it looks like the recording currently contains secrets, and not all sub ID occurrences are replaced by the dummy job ID (0000-...).

@vxfield investigated and the issue is twofold:

  1. The content-range header is missing from the CustomRecordingProcessor allow list
  2. The interactive credential adds additional secrets.

The proposed solution is to add the missing header, add sanitization steps for the output YAML in the regex_replacer, and add a new InteractiveAccessTokenReplacer class that replaces any tokens generated by the interactive credential.
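The sanitization step described above can be sketched as a simple regex replacer (a hypothetical illustration of the idea, not the actual CustomRecordingProcessor or regex_replacer code; the patterns are assumptions):

```python
import re

# Hypothetical patterns; the real recording processor defines its own rules.
SANITIZERS = [
    # Scrub access tokens in key/value pairs like: access_token: "..."
    (re.compile(r"(access_token[\"']?\s*[:=]\s*[\"'])[^\"']+"), r"\1PLACEHOLDER"),
    # Replace GUIDs (subscription/job IDs) with the dummy ID.
    (re.compile(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"),
     "00000000-0000-0000-0000-000000000000"),
]

def sanitize(recording_text):
    """Apply every sanitizer pattern to the recorded output text."""
    for pattern, repl in SANITIZERS:
        recording_text = pattern.sub(repl, recording_text)
    return recording_text
```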

Authentication issue with MSAL Credentials on Windows: ImportError: DLL load failed while importing win32file: The specified module could not be found.

Summary

On Windows, when trying to import the azure.quantum module (for example: from azure.quantum import Workspace) it fails with ImportError: DLL load failed while importing win32file: The specified module could not be found..

Details

The import fails when trying to use SharedTokenCacheCredential and internally importing the MSAL-Extensions and PortaLocker packages with a stack-trace ending like the following:

File "C:\Users\username\Miniconda3\envs\qiot\lib\site-packages\portalocker\portalocker.py", line 9, in <module>
    import win32file
ImportError: DLL load failed while importing win32file: The specified module could not be found.

Cause

You likely have older versions of the azure-identity and pywin32 packages that have a bug.

Solution

In the July 2021 release we will update the dependencies of the azure-quantum package to require minimum versions of our dependencies.

Workaround

Make sure to install the latest azure-identity package in your environment.

pip install azure-identity --upgrade
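To verify that the environment actually picked up a recent enough azure-identity, something like the following can help (a stdlib-only sketch; the minimum version shown is illustrative, not the official requirement):

```python
from importlib import metadata

def meets_minimum(installed, minimum):
    """Compare dotted release versions numerically (pre-release tags ignored)."""
    as_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3] if p.isdigit())
    return as_tuple(installed) >= as_tuple(minimum)

try:
    installed = metadata.version("azure-identity")
    print(installed, meets_minimum(installed, "1.6.0"))  # "1.6.0" is illustrative
except metadata.PackageNotFoundError:
    print("azure-identity is not installed")
```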

Cannot submit jobs from azure-quantum version 0.15.2101.126940

pip install azure-quantum==0.15.2101.126940

0.15.2101.126940 is currently the latest version.

Run any optimization job, for example:

from azure.quantum import Workspace
# Provide subscription_id, resource_group, and workspace_name below
workspace = Workspace(subscription_id=subscription_id, resource_group=resource_group, name=workspace_name)
workspace.login()

from azure.quantum.optimization import Problem, ProblemType, Term,  SimulatedAnnealing

problem = Problem(name="Test Problem", problem_type=ProblemType.ising)
terms = [
    Term(w=-9, indices=[0]),
    Term(w=-3, indices=[1,0]),
]
problem.add_terms(terms=terms)
solver = SimulatedAnnealing(workspace, timeout=100)
result = solver.optimize(problem)
print(result)

It fails with the following error:

Traceback (most recent call last):
  File "c:/src/qio/qio-simple.py", line 14, in <module>
    result = solver.optimize(problem)
  File "C:\Users\<user>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\azure\quantum\optimization\solvers.py", line 107, in optimize
    job = self.submit(problem)
  File "C:\Users\<user>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\azure\quantum\optimization\solvers.py", line 64, in submit
    container_uri = self.workspace._get_linked_storage_sas_uri(container_name)
  File "C:\Users\<user>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\azure\quantum\workspace.py", line 323, in _get_linked_storage_sas_uri
    client = self._create_workspace_storage_client()
  File "C:\Users\<user>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\azure\quantum\workspace.py", line 311, in _create_workspace_storage_client
    client = self._create_client().storage
  File "C:\Users\<user>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\azure\quantum\workspace.py", line 303, in _create_client
    client = QuantumClient(auth, self.subscription_id, self.resource_group, self.name, base_url)
TypeError: __init__() takes from 4 to 5 positional arguments but 6 were given

A workaround is to install the previous version: pip install azure-quantum==0.15.2101.125897

Service Principal auth broken in latest azure-quantum

Using the latest version of the azure-quantum package and following the official service principal auth instructions shows that auth is currently broken. Attempting to authenticate yields the following exception:

  File "\Python\Python36\lib\site-packages\msrest\pipeline\__init__.py", line 197, in run
    return first_node.send(pipeline_request, **kwargs)  # type: ignore
  File "\Python\Python36\lib\site-packages\msrest\pipeline\__init__.py", line 150, in send
    response = self.next.send(request, **kwargs)
  File "\Python\Python36\lib\site-packages\msrest\pipeline\requests.py", line 68, in send
    request.context.session = session = self._creds.signed_session()
  File "\Python\Python36\lib\site-packages\msrest\authentication.py", line 118, in signed_session
    header = "{} {}".format(self.scheme, self.token['access_token'])
TypeError: 'ServicePrincipalCredentials' object is not subscriptable

It seems that this is also using an old pattern, and that Azure SDKs are moving to the azure-identity package instead of azure-common.

Bug: Qiskit results are overwritten when using helper qubits

The current implementation of azure.quantum.qiskit contains a bug: if not all qubits are measured, the resulting histogram counts are incorrect.

The bug is in this line.

This causes results for different bitstrings that share the same readout on the measured qubits to overwrite each other, which means the total number of counts is incorrect.

Steps to reproduce

circuit = QuantumCircuit(4, 3)
circuit.name = "Qiskit Sample - 3-qubit GHZ circuit"
circuit.h(0)
circuit.cx(0, 1)
circuit.cx(1, 2)
circuit.h(3) # Dummy helper qubit that is not measured
circuit.measure([0,1,2], [0, 1, 2])

provider = AzureQuantumProvider(...)
backend = provider.get_backend("ionq.simulator")
job = backend.run(circuit=circuit, shots=1e3)
result = job.result()

assert result.data()["counts"] == {
    '000': 500, '111': 500
}

This raises:

E               AssertionError: assert {'000': 125, '111': 125} == {'000': 250, '111': 250}
E                 Differing items:
E                 {'000': 125} != {'000': 250}
E                 {'111': 125} != {'111': 250}
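The correct aggregation sums counts for bitstrings that agree on the measured qubits instead of overwriting them. A stdlib-only sketch of the idea (`meas_map` here is a hypothetical list of bit positions to keep; the bit ordering is simplified for illustration):

```python
from collections import Counter

def project_counts(raw_counts, meas_map):
    """Sum counts over unmeasured qubits, keeping only the bits in meas_map."""
    projected = Counter()
    for bitstring, count in raw_counts.items():
        key = "".join(bitstring[i] for i in meas_map)
        projected[key] += count  # accumulate, don't overwrite
    return dict(projected)

# The first bit is an unmeasured helper: '0000' and '1000' both read out '000'.
raw = {"0000": 125, "1000": 125, "0111": 125, "1111": 125}
print(project_counts(raw, meas_map=[1, 2, 3]))  # {'000': 250, '111': 250}
```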

Support the submission of Q# projects with `azure.quantum` API

I want to propose an enhancement to the azure.quantum python API: support Q# Jobs submission to Azure Quantum.

Currently, the official ways to submit Q# jobs to Azure Quantum are:

  1. Python: use the qsharp python package
  2. Jupyter Notebook: %azure.execute magic command
  3. CLI: az quantum

1 and 2 use the same technology (IQ#), while 3 is written in Python:
https://github.com/Azure/azure-cli-extensions/tree/main/src/quantum

The proposal is to take the relevant parts from az quantum Python code and add them to azure.quantum (qdk-python).
https://github.com/Azure/azure-cli-extensions/blob/3b632bb2e04244d662ca724c88da0b8034d5ef60/src/quantum/azext_quantum/operations/job.py#L85-L86
https://github.com/Azure/azure-cli-extensions/blob/3b632bb2e04244d662ca724c88da0b8034d5ef60/src/quantum/azext_quantum/operations/job.py#L162-L163

The azure.quantum Python API is more natural in Python code (does not require Jupyter server with an IQ# kernel) and can manage multiple accounts simultaneously (unlike az CLI).

Naively, all that is needed is to change the submit method of the QsharpJob class (which will inherit from the Job class).

Cannot list jobs with FilteredJob

Filtering jobs by time created throws TypeError: 'tzinfo' is an invalid keyword argument for replace()

I reproduced the error in a UT in this branch: https://github.com/microsoft/qdk-python/tree/guenp/fix-filtered-job

cc @anpaz @adelebai

________________________________________________________________________________________________________________________ TestJob.test_filtered_job ________________________________________________________________________________________________________________________

self = <test_job.TestJob testMethod=test_filtered_job>

    def test_filtered_job(self):
        from datetime import date, timedelta
        from azure.quantum._client.models import JobStatus
    
        yesterday = date.today() - timedelta(days = 1)
        workspace = self.create_workspace()
>       jobs = workspace.list_jobs(created_after=yesterday)

tests/unit/test_job.py:134: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
azure/quantum/workspace.py:254: in list_jobs
    if deserialized_job.matches_filter(name_match, status, created_after):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <azure.quantum.job.Job object at 0x7f437f7f2fd0>, name_match = None, status = None, created_after = datetime.date(2021, 7, 15)

    def matches_filter(
        self,
        name_match: str = None,
        status:  Optional[JobStatus] = None,
        created_after: Optional[datetime] = None
    ) -> bool:
        """Checks if job (self) matches the given properties if any.
            :param name_match: regex expression for job name matching
            :param status: filter by job status
            :param created_after: filter jobs after time of job creation
        """
        if name_match is not None and re.search(name_match, self.details.name) is None:
           return False
    
        if status is not None and self.details.status != status.value:
            return False
    
>       if created_after is not None and self.details.creation_time.replace(tzinfo=timezone.utc) < created_after.replace(tzinfo=timezone.utc):
E       TypeError: 'tzinfo' is an invalid keyword argument for replace()

azure/quantum/job.py:122: TypeError
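The failure comes from passing a datetime.date (which has no tzinfo) where a datetime is expected: date.replace() accepts only year/month/day. A minimal sketch of a fix, normalizing filter values to aware datetimes before comparing (the helper name is hypothetical):

```python
from datetime import date, datetime, time, timezone

def as_aware_datetime(value):
    """Promote a date to a tz-aware datetime; attach UTC to naive datetimes."""
    if isinstance(value, datetime):  # check datetime first: it subclasses date
        return value if value.tzinfo else value.replace(tzinfo=timezone.utc)
    if isinstance(value, date):
        return datetime.combine(value, time.min, tzinfo=timezone.utc)
    raise TypeError(f"expected date or datetime, got {type(value).__name__}")

# A job created at 10:00 UTC compares cleanly against a plain date filter.
creation_time = datetime(2021, 7, 16, 10, 0, tzinfo=timezone.utc)
print(creation_time >= as_aware_datetime(date(2021, 7, 15)))  # True
```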

Add support for BackendProperties to Qiskit backends

Qiskit supports a BackendProperties model that includes information about the quantum device, such as numbers of qubits and T1/T2. It would be great to support that here for the IonQ and Honeywell backends.

More info here:
https://qiskit.org/documentation/stubs/qiskit.providers.models.BackendProperties.html

Azure Quantum device specs can be found here:
https://docs.microsoft.com/en-us/azure/quantum/provider-ionq#quantum-computer
https://docs.microsoft.com/en-us/azure/quantum/provider-honeywell#honeywell-system-model-h1

Workspace does not handle token refresh automatically

When running a script for more than an hour, the short-lived access token will invariably expire. When this happens the user receives an Unauthorized error and needs to call workspace.login() again.

This is not ideal behavior; instead, the SDK should transparently refresh its internal credential before it expires so that users don't need to handle this scenario.
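The desired behavior can be sketched as a credential wrapper that refreshes shortly before expiry (a hypothetical illustration, not the SDK's actual credential class; the token factory returning an expiry timestamp is an assumption):

```python
import time

class RefreshingCredential:
    """Wraps a token factory and refreshes before the current token expires."""

    def __init__(self, fetch_token, refresh_margin_secs=300):
        self._fetch = fetch_token          # returns (token, expires_on_epoch)
        self._margin = refresh_margin_secs
        self._token, self._expires_on = None, 0.0

    def get_token(self):
        # Refresh when inside the safety margin, so callers never see expiry.
        if time.time() >= self._expires_on - self._margin:
            self._token, self._expires_on = self._fetch()
        return self._token
```

Routing every workspace call through get_token() means long-running scripts never hit the Unauthorized path.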

Cirq Job __repr__ method

@vtomole pointed out that the Job.__repr__ method should be modified according to the following:

The class has a __repr__ method that produces a Python expression that evaluates to an object equal to the original value. The expression assumes that cirq, sympy, numpy as np, and pandas as pd have been imported.

If the repr is cumbersome, gates should specify a _repr_pretty_ method. This method will be used preferentially by Jupyter notebooks, IPython, etc.

See https://github.com/quantumlib/Cirq/blob/51b56288fa9a84dff9697524da9ab0a4d57a56f5/docs/dev/gates.md#gate-and-operation-guidelines

@vtomole, I was wondering if the above is a correct recap of our conversation, could you please verify? Also, is it a correct observation that cirq_ionq.Job is similarly missing a __repr__ method? See: https://github.com/quantumlib/Cirq/blob/1f14edf6bae39b4146e3f25f04ca4f26effc6773/cirq-ionq/cirq_ionq/job.py#L247
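The guideline can be illustrated with a minimal class (a generic sketch, not the actual cirq_ionq.Job code): __repr__ returns an expression that rebuilds an equal object, and _repr_pretty_ gives notebooks a shorter form.

```python
class Job:
    """Toy job object following the Cirq __repr__ / _repr_pretty_ guidelines."""

    def __init__(self, job_id, target):
        self.job_id, self.target = job_id, target

    def __eq__(self, other):
        return (self.job_id, self.target) == (other.job_id, other.target)

    def __repr__(self):
        # Evaluates to an equal object, as the guideline requires.
        return f"Job(job_id={self.job_id!r}, target={self.target!r})"

    def _repr_pretty_(self, p, cycle):
        # Preferred by Jupyter/IPython when the full repr is cumbersome.
        p.text("Job(...)" if cycle else f"Job({self.job_id})")

job = Job("5dc72a0a", "ionq.simulator")
assert eval(repr(job)) == job  # round-trips through the repr expression
```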

async operations not exposed via workspace API

The azure SDK has async methods for all of the operations for the Quantum API.

It would be really handy to have async methods, as this is the type of work you'd want to await on.

I'll raise a PR

azure-quantum authentication get_token failed messages

When authenticating with azure-quantum and using the default DefaultAzureCredential, it will try several authentication mechanisms as described here.

Even if one of the authentication methods happens to succeed, you will see messages for the authentication methods that didn't succeed before the final (successful) method was attempted.

So you may see a message like this, even if the authentication succeeded.

EnvironmentCredential.get_token failed: EnvironmentCredential authentication unavailable. Environment variables are not fully configured.
ManagedIdentityCredential.get_token failed: ManagedIdentityCredential authentication unavailable, no managed identity endpoint found.
SharedTokenCacheCredential.get_token failed: SharedTokenCacheCredential authentication unavailable. No accounts were found in the cache.
VisualStudioCodeCredential.get_token failed: Failed to get Azure user details from Visual Studio Code.
AzureCliCredential.get_token failed: Please run 'az login' to set up an account

This is the current behavior of the Azure.Identity.DefaultAzureCredential and a corresponding issue was filled to the Azure SDK team.

Meanwhile, please ignore the get_token failed messages if the authentication succeeded (i.e. you were able to submit jobs to Azure Quantum, etc.).

If all authentication methods failed, review the error for each attempt to see what was missing for the method you intended to use.
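Until the upstream issue is resolved, the noise can be reduced by raising the log level of the azure.identity logger (standard Python logging; the assumption here is that the credential chain emits these messages under that logger name):

```python
import logging

# Hide the per-credential "get_token failed" chatter; real failures still
# surface as exceptions when every credential in the chain fails.
logging.getLogger("azure.identity").setLevel(logging.ERROR)
```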

Blob upload and job creation should be consolidated to new Target class

The following methods are used to upload a blob to a container in Azure Storage and create and submit a Job:

It would be cleaner and easier to use if the above methods were consolidated and implemented on the Job class instead.

@vxfield @anpaz

Workspace doesn't reuse the http session between calls

The QuantumClient is intended to be used as a context manager, whilst currently it is not used that way:

https://github.com/microsoft/qdk-python/blob/main/azure-quantum/azure/quantum/workspace.py#L177-L199

    def _create_client(self) -> QuantumClient:
        base_url = BASE_URL(self.location)
        logger.debug(
            f"Creating client for: subs:{self.subscription_id},"
            + f"rg={self.resource_group}, ws={self.name}, frontdoor={base_url}"
        )

        client = QuantumClient(
            credential=self.credentials,
            subscription_id=self.subscription_id,
            resource_group_name=self.resource_group,
            workspace_name=self.name,
            base_url=base_url,
        )
        return client

    def _create_jobs_client(self) -> JobsOperations:
        client = self._create_client().jobs
        return client

    def _create_workspace_storage_client(self) -> StorageOperations:
        client = self._create_client().storage
        return client

Refactoring this may improve execution times.

Thanks @tonybaloney for flagging.
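One straightforward refactor is to create the client once and reuse it, so every operations group shares one HTTP session. A generic sketch of the caching pattern using functools.cached_property (the client factory and attribute names are stand-ins, not the actual QuantumClient API):

```python
from functools import cached_property

class Workspace:
    """Sketch: build the underlying client once and reuse its HTTP session."""

    def __init__(self, client_factory):
        self._client_factory = client_factory  # e.g. a QuantumClient constructor

    @cached_property
    def _client(self):
        # Constructed on first access, then reused for the workspace's lifetime.
        return self._client_factory()

    @property
    def _jobs_client(self):
        return self._client.jobs  # operations groups share one session
```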

Cannot use job.result() after fetching job from AzureQuantumProvider.get_job

provider = AzureQuantumProvider(resource_id="", location="")
job = provider.get_job(job_id)
job.result()
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/tmp/ipykernel_62/168863067.py in <module>
----> 1 job.result()

~/.local/lib/python3.7/site-packages/azure/quantum/plugins/qiskit/job.py in result(self, timeout)
     74 
     75         success = self._azure_job.details.status == "Succeeded"
---> 76         results = self._format_results()
     77 
     78         return Result.from_dict(

~/.local/lib/python3.7/site-packages/azure/quantum/plugins/qiskit/job.py in _format_results(self)
    117 
    118             elif (self._azure_job.details.output_data_format == IONQ_OUTPUT_DATA_FORMAT):
--> 119                 job_result["data"] = self._format_ionq_results()
    120                 job_result["header"] = self._azure_job.details.metadata
    121 

~/.local/lib/python3.7/site-packages/azure/quantum/plugins/qiskit/job.py in _format_ionq_results(self)
    137         az_result = self._azure_job.get_results()
    138         shots = int(self._azure_job.details.input_params['shots']) if 'shots' in self._azure_job.details.input_params else self._backend.options.get('shots')
--> 139         meas_map = self.metadata["metadata"]["meas_map"]
    140         num_qubits = self.metadata["metadata"]["num_qubits"]
    141 

KeyError: 'metadata'
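The traceback shows the failure comes from an unconditional `metadata["metadata"]` lookup, which is absent when a job is fetched via `get_job` rather than created in the current session. A defensive lookup could avoid the crash; the helper below is a hypothetical sketch, not the SDK's actual fix:

```python
def read_ionq_metadata(job_metadata):
    """Defensively read meas_map/num_qubits instead of raising KeyError."""
    inner = (job_metadata or {}).get("metadata") or {}
    meas_map = inner.get("meas_map")
    num_qubits = inner.get("num_qubits")
    return meas_map, num_qubits


# A job fetched via get_job may have no "metadata" key at all:
meas_map, num_qubits = read_ionq_metadata({})
```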

Job name not the same as circuit name from QuantumCircuit call

When submitting a Qiskit QuantumCircuit created like this:

qc = QuantumCircuit(name="my-circuit-name")

The job name on the Job Management portal shows a generic circuit name like circuit-491 instead of using my circuit name.

The version of azure-quantum is 0.21.2111.177148, and the version of qiskit is 0.32.1.

The target `microsoft.simulator.fullstate` is missing

I have preview access to the Microsoft full-state cloud simulator, but it is not listed via the Python API (azure.quantum).
When I use the az quantum CLI, it is listed.


  1. Azure CLI (after logging in and setting the workspace):

    az quantum target list
    [
      {
        "currentAvailability": "Available",
        "id": "Microsoft.Simulator",
        "targets": [
          {
            "averageQueueTime": 0,
            "currentAvailability": "Available",
            "id": "microsoft.simulator.fullstate",
            "statusPage": null
          }
        ]
      },
      {
        "currentAvailability": "Available",
        "id": "ionq",
        "targets": [
          {
            "averageQueueTime": 102,
            "currentAvailability": "Available",
            "id": "ionq.qpu",
            "statusPage": "https://status.ionq.co"
          },
          {
            "averageQueueTime": 2,
            "currentAvailability": "Available",
            "id": "ionq.simulator",
            "statusPage": "https://status.ionq.co"
          }
        ]
      }
    ]
  2. When using the Python API (qdk-python), the target microsoft.simulator.fullstate is missing:

    from pprint import pprint
    
    from azure.quantum.workspace import Workspace
    
    ws = Workspace(resource_id="<my_resource_id>", location="<my_ws_location>")
    pprint(ws.get_targets())
    [<Target name="ionq.qpu", avg. queue time=172 s, Available>,
     <Target name="ionq.simulator", avg. queue time=2 s, Available>]
    
  3. This is the providers list of the quantum workspace from Azure portal:
    (screenshot of the workspace's provider list omitted)
    The target microsoft.simulator.fullstate is listed.

I verified that both methods (az quantum CLI and azure.quantum python package) access the same workspace by comparing the "id" from az quantum workspace show with ws.subscription_id, ws.resource_group, and ws.name.


Proposed solutions:

  1. This issue is a consequence of the following filtering in TargetFactory:
    https://github.com/microsoft/qdk-python/blob/f78537b52f7c478eedc77ea8277dd94212d85950/azure-quantum/azure/quantum/target/target_factory.py#L59-L67
    And the current absence of the microsoft.simulator.fullstate target from:
    https://github.com/microsoft/qdk-python/tree/main/azure-quantum/azure/quantum/target/microsoft
    So we can add it there.

  2. Expose TargetFactory's all_targets parameter
    https://github.com/microsoft/qdk-python/blob/f78537b52f7c478eedc77ea8277dd94212d85950/azure-quantum/azure/quantum/target/target_factory.py#L35
    to Workspace's get_targets method:
    https://github.com/microsoft/qdk-python/blob/f78537b52f7c478eedc77ea8277dd94212d85950/azure-quantum/azure/quantum/workspace.py#L314
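Proposal 2 can be illustrated with a small stand-alone sketch. The names `KNOWN_TARGETS` and `get_targets` below are hypothetical stand-ins for the SDK's target registry and the `Workspace.get_targets` method:

```python
# Stand-in for the SDK's registry of targets that have dedicated classes
KNOWN_TARGETS = {"ionq.qpu", "ionq.simulator"}


def get_targets(service_targets, all_targets=False):
    """Return only known targets by default; everything when all_targets=True."""
    if all_targets:
        return list(service_targets)
    return [t for t in service_targets if t in KNOWN_TARGETS]


# Target ids as reported by the service, including the currently filtered one
service_targets = ["ionq.qpu", "ionq.simulator", "microsoft.simulator.fullstate"]
```

Passing `all_targets=True` would let users reach targets the SDK has no dedicated class for, matching what the CLI already shows.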

Serialize the Optimization `Problem` name

Short description

When you serialize an optimization Problem object, its name is not serialized with it. Serializing the name would be useful for customers who want to save a problem locally and reuse it later without having to specify the problem name again.

Repro steps

  1. Create a problem and serialize it:
problem = Problem(name="myProblem")
problem.terms = [
    Term(c=3, indices=[1, 0]),
    Term(c=5, indices=[2, 0]),
]
serialized_problem = problem.serialize()

Observed behavior

The value of serialized_problem does not contain the name of the problem ("myProblem" in this case):

{
  "cost_function": {
    "version": "1.0",
    "type": "ising",
    "terms": [
      {
        "c": 3,
        "ids": [
          1,
          0
        ]
      },
      {
        "c": 5,
        "ids": [
          2,
          0
        ]
      }
    ]
  }
}

That forces the user to specify the problem name again when deserializing the problem:

deserialized_problem = Problem.deserialize(problem_as_json=serialized_problem,
                                           name="myProblem")

Expected behavior

When serializing the problem, the name should also be serialized, like the example below:

{
  "metadata": {
    "name": "myProblem"
  },
  "cost_function": {
    "version": "1.0",
    "type": "ising",
    "terms": [
      {
        "c": 3,
        "ids": [
          1,
          0
        ]
      },
      {
        "c": 5,
        "ids": [
          2,
          0
        ]
      }
    ]
  }
}

And then, the user does not need to specify the problem name again when deserializing the problem:

deserialized_problem = Problem.deserialize(problem_as_json=serialized_problem)

Backward compatibility

For backward compatibility, when deserializing a problem whose payload does not contain a name, the user should still be able to pass one explicitly; if none is passed, a default problem name should be used.

deserialized_problem = Problem.deserialize(problem_as_json=serialized_problem,
                                           name="myProblem")
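The proposed behavior, including the backward-compatible fallback, can be sketched with plain JSON helpers. The function names `serialize_problem`/`deserialize_problem` and the default name are illustrative, not the SDK API:

```python
import json


def serialize_problem(name, cost_function):
    # Proposed format: carry the name in a "metadata" section.
    return json.dumps({"metadata": {"name": name}, "cost_function": cost_function})


def deserialize_problem(problem_as_json, name=None):
    data = json.loads(problem_as_json)
    # Prefer the serialized name; fall back to the caller's name, then a
    # default, so legacy payloads without "metadata" still deserialize.
    resolved = data.get("metadata", {}).get("name") or name or "Optimization problem"
    return resolved, data["cost_function"]


payload = serialize_problem("myProblem", {"version": "1.0", "type": "ising", "terms": []})
name, cost_function = deserialize_problem(payload)
```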

azure-quantum authentication issue when using a personal/MSA account and the interactive browser login

Version

azure-quantum version >= 0.17.2105.143879 (May 2021 release)

Issue reported on Azure SDK for Python

Azure/azure-sdk-for-python#18975

Repro steps:

  1. Make sure that you are not signed-in with Visual Studio or Azure CLI
    a) Sign-out of Visual Studio
    b) Run az logout

  2. Install pre-reqs
    a) pip install azure-quantum --upgrade

  3. Create a workspace and submit a minimal problem

from azure.quantum import Workspace

# Copy the settings for your workspace below
workspace = Workspace(
    resource_id = "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Quantum/Workspaces/<workspace name>", 
    location = "eastus"
)

from azure.quantum.optimization import Problem, ProblemType
from azure.quantum.optimization import ParallelTempering

solver = ParallelTempering(workspace, timeout=100) # timeout in seconds
problem = Problem(name="My Problem", problem_type=ProblemType.ising)
result = solver.optimize(problem)
  4. Attempt to log in using a personal/MSA account
    When you call solver.optimize(), it will attempt to connect to Azure Storage and the Azure Quantum service, starting with an authentication attempt.
    By default, if you do not provide a credential to the workspace, it will use the DefaultAzureCredential, which attempts several forms of authentication and ends with the InteractiveBrowserCredential. All the other forms of authentication must fail before the browser authentication is attempted (and we need that to repro this issue).
    When the browser authentication opens a new https://login.microsoftonline.com/ web page, try to log in with a personal/MSA account such as [email protected]. A work/school account should authenticate just fine.
    With a personal/MSA account you should see the following error message:

Error message:

User account '[email protected]' from identity provider 'live.com' does not exist in tenant 'Microsoft Services' and cannot access the application '04b07795-8ddb-461a-bbee-02f9e1bf7b46'(Microsoft Azure CLI) in that tenant. The account needs to be added as an external user in the tenant first. Sign out and sign in again with a different Azure Active Directory user account.

Workarounds:

Option 1: az login
a) Before running your Python program, run az login in a terminal/command-line to authenticate interactively. Then the DefaultAzureCredential will use the AzureCliCredential to authenticate automatically with the same credentials that you used during az login.
b) When you run solver.optimize() the InteractiveBrowserCredential will not be attempted anymore and you won't see a login page because the AzureCliCredential should succeed.

Option 2: Pass the tenant_id via environment variable
a) Find your account tenant_id: How to find your Azure Active Directory tenant ID
b) Before running your Python program, set the AZURE_TENANT_ID environment variable with your tenant_id value.
PowerShell example:

$env:AZURE_TENANT_ID = "your tenant id"

c) When you run solver.optimize() the browser authentication should open a new https://login.microsoftonline.com/ web page, and you should be able to login with a personal/MSA account such as [email protected].

Option 3: Pass the tenant_id via the credential parameter
a) Find your account tenant_id: How to find your Azure Active Directory tenant ID
b) Pass your tenant_id as part of the DefaultAzureCredential when creating the workspace:

from azure.quantum import Workspace
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential(exclude_interactive_browser_credential=False,
                                    interactive_browser_tenant_id="your tenant id")

# Copy the settings for your workspace below
workspace = Workspace(
    resource_id = "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.Quantum/Workspaces/<workspace name>", 
    location = "eastus",
    credential=credential
)

from azure.quantum.optimization import Problem, ProblemType
from azure.quantum.optimization import ParallelTempering

solver = ParallelTempering(workspace, timeout=100) # timeout in seconds
problem = Problem(name="My Problem", problem_type=ProblemType.ising)
result = solver.optimize(problem)

c) When you run solver.optimize() the browser authentication should open a new https://login.microsoftonline.com/ web page, and you should be able to login with a personal/MSA account such as [email protected].

Mocks should be used within scope

Some tests use mocks but override existing functions in the package globally, which causes other tests that need the original function to fail during test recording.

For example:
https://github.com/microsoft/qdk-python/blob/main/azure-quantum/tests/unit/test_solvers.py#L27

azure.quantum.optimization.problem.upload_blob = Mock()
job = self.testsolver.submit(problem)

To fix this, we need to use a mock within a scope, e.g.:

with patch("azure.quantum.job.base_job.upload_blob") as mock_upload:
    job = self.testsolver.submit(problem)
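The difference is easy to demonstrate with a stand-alone example from the standard library (patching `math.sqrt` here instead of `upload_blob`): the patch is automatically undone when the `with` block exits, so later tests see the original function.

```python
import math
from unittest.mock import patch

with patch("math.sqrt", return_value=42) as mock_sqrt:
    inside = math.sqrt(9)   # patched: returns the mock's canned value

outside = math.sqrt(9)      # restored: the real sqrt again
```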

ClientAuthenticationError when fetching job results with Workspace.list_jobs

Bug

When fetching the job results after getting a job via Workspace.list_jobs, the client throws an error:

jobs = workspace.list_jobs("My_Job_")
job = jobs[0]
job.get_results()
ClientAuthenticationError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:43efab21-401e-0026-36f1-e4a06d000000
Time:2021-11-29T07:21:40.4146303Z
ErrorCode:AuthenticationFailed
authenticationerrordetail:Signed expiry time [Sat, 20 Nov 2021 02:46:49 GMT] has to be after signed start time [Mon, 29 Nov 2021 07:21:40 GMT]

Workaround

Run job.refresh() before calling job.get_results().
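The error detail ("Signed expiry time ... has to be after signed start time") suggests jobs returned by list_jobs carry a cached SAS URI whose signed window has expired, and that refresh() re-fetches the details with a fresh URI. The toy `Job` class below is an illustrative model of that behavior, not the SDK's implementation:

```python
class Job:
    """Toy stand-in for the azure-quantum job object (not the real class)."""

    def __init__(self):
        self.sas_expired = True  # list_jobs returned a stale SAS URI

    def refresh(self):
        # Stand-in for re-fetching job details, which renews the SAS URI.
        self.sas_expired = False

    def get_results(self):
        if self.sas_expired:
            raise RuntimeError("ClientAuthenticationError: signed expiry in the past")
        return {"status": "Succeeded"}


job = Job()
job.refresh()              # workaround: refresh before fetching results
results = job.get_results()
```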

Cannot submit list of circuits via AzureQuantumBackend.run

Currently, to submit multiple circuits, users have to submit each circuit individually, e.g.

jobs = []
for circuit in circuits:
    jobs.append(backend.run(circuit, shots=N))

results = []
for job in jobs:
    results.append(job.result())

Iterative algorithms require being able to submit a batch of circuits at the same time. There are several Qiskit libraries that depend on support for backend.run(circuits: List[QuantumCircuit]), such as qiskit-experiments and qiskit-aqua. See also: https://qiskit.org/documentation/stubs/qiskit.providers.ibmq.managed.IBMQJobManager.html#qiskit.providers.ibmq.managed.IBMQJobManager

This would require being able to submit multiple circuits and get a single job ID back to fetch the results, similar to the IBMQ experience. However, the service does not currently support this; Azure Quantum's Backend only accepts a single circuit or a list of circuits of length one (see ionq.py#L70).

For instance, Tomography uses the backend.run() method under the hood which returns a single job: base_experiment.py#L345.

Not clear from README.md that qdk requires rdkit install via Conda

It's not immediately clear from the main README.md file that you need to install rdkit before running the qdk.chemistry example.
It would be useful to either link to the qdk/README.md file or list the requirement in the "how to install" section of the main README.

Not clear how to estimate job costs

The costs for submitting jobs to a QPU are currently not transparent. It would be great if there were a way to easily estimate the costs for a circuit.

For Honeywell QPUs, costs are recorded in "HQCs" (Honeywell Quantum Credits), as follows:

HQC = 5 + C(N_1q + 10 N_2q + 5 N_m)/5000

where N_1q is the number of 1-qubit gates, N_2q is the number of 2-qubit gates, N_m is the number of state prep and measurement operations, and C is the shot count.

For IonQ QPUs, costs are calculated per gate-shot: the number of gates in the circuit, multiplied by the number of shots.

cost = ($0.00003 * N_1q + $0.0003 * (N_2q + 6 * (N_multi - 2))) * num_shots

with a minimum cost of $1, where N_1q is the number of one-qubit gates, N_2q is the number of two-qubit gates, N_multi is the number of qubits involved in a multi-controlled gate, and num_shots is the number of shots.
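The two formulas above can be written as small helpers. This is a sketch assuming the formulas exactly as quoted; the function names are mine, clamping the N_multi term at zero when there are no multi-controlled gates is my assumption, and actual provider pricing may differ:

```python
def honeywell_hqcs(n_1q, n_2q, n_m, shots):
    """HQC = 5 + C*(N_1q + 10*N_2q + 5*N_m)/5000, with C the shot count."""
    return 5 + shots * (n_1q + 10 * n_2q + 5 * n_m) / 5000


def ionq_cost_usd(n_1q, n_2q, n_multi, shots):
    """Per gate-shot pricing with a $1.00 minimum charge."""
    # Assumption: no multi-controlled charge when there are no such gates.
    multi_term = max(n_multi - 2, 0)
    per_shot = 0.00003 * n_1q + 0.0003 * (n_2q + 6 * multi_term)
    return max(per_shot * shots, 1.0)
```

For example, a circuit with 100 one-qubit gates, 10 two-qubit gates, and 10 measurements run for 100 shots would cost 10 HQCs under the quoted Honeywell formula.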

DefaultAzureCredential prints libsecret errors in linux container from SharedTokenCacheCredential

Related to: Azure/azure-sdk-for-python#19857

The custom _DefaultAzureCredential() that we use as the default credential for the Workspace has an issue with the SharedTokenCacheCredential in a Linux Docker container.
It dumps either pyobject or libsecret errors from the SharedTokenCacheCredential, which is confusing to users.

It also looks like the .NET Azure.Identity library excludes SharedTokenCacheCredential by default. For consistency across SDKs, it would be great if azure-sdk-for-python similarly excluded SharedTokenCacheCredential from DefaultAzureCredential, since it really only works on Windows and seems to cause issues on other systems.

For consistency, we should exclude the SharedTokenCacheCredential from our default credential.

Merged PR for azure-sdk-for-net to remove SharedTokenCacheCredential from default: Azure/azure-sdk-for-net#16615
Related Issue: Azure/azure-sdk-for-net#17052

Inconsistency with Cirq for pulling back existing job results from different providers

Consistent behavior for service.run() job

When using service.run(), we see that the resulting output is consistent across providers:

IonQ

(screenshot of the IonQ service.run() output omitted)

Honeywell

(screenshot of the Honeywell service.run() output omitted)

Calling service.run() against different providers returns a CirqResult in both cases.

Inconsistent behavior for job.results() of existing job

However, when pulling back an existing job, we see inconsistent behavior:
(screenshot omitted)

For IonQ and Honeywell, job.results() returns the raw result format from the provider, rather than a consistent Microsoft result format that contains counts/probabilities.

In addition, it does not seem possible to get existing jobs back into the same format as when calling service.run(). For IonQ, I can use to_cirq_result to get a result similar to the one returned by service.run().

Trying the same thing for Honeywell raises an error: 'dict' object has no attribute 'to_cirq_result':
(screenshot omitted)

This means that even though Honeywell's service.run() returned a CirqResult, there is no similar way to recover a CirqResult from an existing job.

Azure Quantum workspace.login(refresh=True) inadvertently removes ServicePrincipalCredentials passed in the workspace.credentials

Workaround:
For customers that pass ServicePrincipalCredentials in workspace.credentials, simply do not call workspace.login(refresh=True), as that will remove the workspace.credentials you previously set.
Instead, use workspace.login(), which defaults to workspace.login(refresh=False).

Integrate async i/o tests and validation into CI/CD pipeline

Currently, async I/O functionality is only tested in recording. We need to enable these tests for live runs, both in the azure-quantum-clients pipeline and in the iqsharp Docker image.

We should also add a validation step to our CI/CD pipeline that tests for any changes to the code that need to be propagated to the async package.

Documentation

Add docstrings to all public functions and methods and make sure they are Sphinx/docgen compatible.
Add links to public documentation where appropriate.

"Finishing" job status is missing from the REST API
