
daptics API

This is the README documentation for the daptics Design of Experiments GraphQL API and the Python API client.

Links

To use the daptics API, you must first register at https://daptics.ai to establish the login and password used for API authentication.

Python Client

The python_client folder contains the Python GraphQL client package module sources (in the daptics_client, phoenix, and syncasync folders), and several interactive Python notebooks for experimenting with the API.

Follow the instructions in the README-NOTEBOOKS.md file in that folder to set up a local Jupyter Notebook server if you do not have access to a server that can open .ipynb files.

GraphQL API Documentation

  1. Install graphql-markdown

  2. Then in the pydocmd folder, run:

NODE_TLS_REJECT_UNAUTHORIZED=0 graphql-markdown --no-toc --title 'Daptics GraphQL API' https://api.daptics.ai/api >graphql_api.md

Notes:

  1. The NODE_TLS_REJECT_UNAUTHORIZED=0 setting is needed to handle our ZeroSSL certificates.

  2. If this fails, you can use the appropriate JSON schema file to generate the docs, with this command:

graphql-markdown --no-toc --title 'Daptics GraphQL API' api-0.14.1.json >graphql_api.md

  3. If using yarn to manage node packages, run:

yarn bin graphql-markdown

to get the location of the graphql-markdown executable.

Python Client Documentation and MkDocs Build

  1. Install these tools in order:

    a. tornado - Important! Specify version 5.1.1 (version 6.0 will break MkDocs)
    b. pdoc3
    c. MkDocs
    d. mkdocs-rtd-dropdown - Theme for MkDocs

  2. Create Markdown documentation for the daptics_client.py file using pdoc3. In the python_client folder, run:

pdoc --pdf --force --template-dir ../pdoc/templates daptics_client >../pydocmd/daptics_client.md

  3. Then build the entire "Read the Docs" site using mkdocs. In the root project folder, where the mkdocs.yml configuration file is located, run:

mkdocs build

HTML and Markdown files will be produced in the docs folder.

Using Jupytext to Extract and Sync to Python Source Files

  1. Install jupytext

  2. Set up metadata in any .ipynb file that contains Python code:

jupytext --set-formats ipynb,python//py:light 03_SimpleTutorial.ipynb

  3. Export the Python code to the /python subdirectory:

jupytext --from ipynb --to python//py:light 03_SimpleTutorial.ipynb

  4. Edit the Python code as needed. Alternatively, when you run the notebook in Jupyter and make changes, the corresponding Python file is kept up to date automatically.

  5. Rebuild the notebook from the Python file without outputs (do this before checking into version control):

jupytext --from python//py:light --to notebook python/03_SimpleTutorial.py

You can also use jupyter nbconvert to remove all output from .ipynb files:

jupyter nbconvert 03_SimpleTutorial.ipynb --to notebook --ClearOutputPreprocessor.enabled=True --inplace

Automated Release Notes by gren

daptics-api's People

Contributors: nhpackard, pzingg

Forkers: 3852-ai

daptics-api's Issues

ipynb files don't display in github

In python_client there are the ipynb files, e.g. 01_README.ipynb, etc. Normally these are displayed by github (in non-executable form).

For some reason, I can't get them to display; I just get the message:

Sorry, something went wrong. Reload?

Wonder what's up with that?

Loading those ipynb files might be a good proxy for documentation directly available on the repo.
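One cheap local check before blaming GitHub's renderer: verify that each notebook is valid JSON and carries the top-level fields the renderer needs. A minimal stdlib sketch; the required-key list is an assumption based on the nbformat schema, not a documented GitHub requirement:

```python
import json

def notebook_problems(source_text):
    """Return a list of problems that commonly break .ipynb rendering."""
    try:
        nb = json.loads(source_text)
    except json.JSONDecodeError as exc:
        return ["not valid JSON: %s" % exc]
    problems = []
    # The nbformat schema requires version fields and a cell list.
    for key in ("nbformat", "nbformat_minor", "cells"):
        if key not in nb:
            problems.append("missing top-level key: " + key)
    return problems

# Usage (hypothetical file name):
# notebook_problems(open("01_README.ipynb", encoding="utf-8").read())
```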

`simulate_experiment_responses` does not install new responses.

(cf. #32)

To generate simulated responses and export them I execute:

space = daptics.get_experimental_space()
design = daptics.design
experiments = daptics.random_experiments_with_responses(space, design)
expsim = daptics.simulate_experiment_responses(experiments)
daptics.export_csv(csv_experiments_file, expsim, True)

This works, to the extent that csv_experiments_file contains the right simulated results.

But then when I request next gen design with

newgen = daptics.put_experiments_csv(
    DapticsExperimentsType.DESIGNED_WITH_OPTIONAL_EXTRAS,
    csv_experiments_file)

I get this error:

Task failed with error(s)!  Messages are:
[0] category:	execution
[0] fatalError:	None
[0] message:	Experiments in Experiments1.csv do not match those of generation 1
[0] systemError:	None

My diagnosis is that the call daptics.random_experiments_with_responses installs the random experiments in Experiments1.csv, and then there is a skew between those and the simulated response experiments in csv_experiments_file.

Proposed solution: dispense with current daptics.simulate_experiment_responses, create new daptics.simulated_experiments_with_responses that is exactly like daptics.random_experiments_with_responses (including installation of Experiments1.csv on backend), except using simulated responses instead of random.

Actually better names would be:

daptics.experiments_with_random_responses
daptics.experiments_with_simulated_responses

I will work on this in new branch...
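The proposed helper might look like this. A sketch only, assuming the method names and signatures discussed in this issue; the real fix would also have to install the simulated experiments on the backend, which this client-side composition by itself does not do:

```python
def experiments_with_simulated_responses(client, space, design):
    """Proposed helper (sketch): like random_experiments_with_responses,
    but the responses come from the simulator, and the *same* experiment
    rows would be the ones recorded on the backend, so Experiments1.csv
    cannot drift out of sync with the responses that get uploaded."""
    # Generate the candidate experiments once...
    experiments = client.random_experiments_with_responses(space, design)
    # ...then replace the random responses with simulated ones for the
    # identical experiment rows.
    return client.simulate_experiment_responses(experiments)
```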

can't connect to api.daptics.ai/api


HTTPError Traceback (most recent call last)
in
27 # The 'connect' method will connect to the API server and obtain the
28 # GraphQL schema.
---> 29 daptics.connect()

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in connect(self)
1036 verify=self.SSL_CERT_VERIFICATION)
1037 self.gql = gql.Client(
-> 1038 transport=http, fetch_schema_from_transport=True)
1039
1040 compat = self.check_api_compatibility()

~/.pyenv/versions/3.7.2/lib/python3.7/site-packages/gql/client.py in init(self, schema, introspection, type_def, transport, fetch_schema_from_transport, retries)
35 not schema
36 ), "Cannot fetch the schema from transport if is already provided"
---> 37 introspection = transport.execute(parse(introspection_query)).data
38 if introspection:
39 assert (

~/.pyenv/versions/3.7.2/lib/python3.7/site-packages/gql/transport/requests.py in execute(self, document, variable_values, operation_name, timeout)
122
123 if "errors" not in result and "data" not in result:
--> 124 response.raise_for_status()
125 raise requests.HTTPError(
126 "Server did not return a GraphQL result", response=response

~/.pyenv/versions/3.7.2/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
939
940 if http_error_msg:
--> 941 raise HTTPError(http_error_msg, response=self)
942
943 def close(self):

HTTPError: 502 Server Error: Bad Gateway for url: https://api.daptics.ai/api
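A transient 502 Bad Gateway can sometimes be ridden out by retrying connect() with exponential backoff. A minimal generic sketch, not part of the client API; daptics.connect is the intended argument:

```python
import time

def connect_with_retries(connect, attempts=5, base_delay=1.0):
    """Call `connect` (e.g. daptics.connect) until it succeeds, sleeping
    base_delay * 2**n seconds between attempts; re-raise the last error."""
    for n in range(attempts):
        try:
            return connect()
        except Exception:
            if n == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** n)

# Usage (hypothetical): connect_with_retries(daptics.connect)
```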

timeout way too often.

Most often, the timeout has been happening on calls to daptics.generate_analytics() (as in the example error message below). It has also been seen for daptics.put_experiments_csv().

traceroute suggests the network path could be troublesome:

$ traceroute inertia.daptics.ai
traceroute to inertia.daptics.ai (142.254.64.34), 64 hops max, 52 byte packets
 1  192.168.68.1 (192.168.68.1)  2.600 ms  1.291 ms  1.042 ms
 2  pppoe-server.net.ngi.it (81.174.0.21)  8.761 ms  9.667 ms  10.860 ms
 3  10.222.67.234 (10.222.67.234)  19.655 ms  19.773 ms  19.894 ms
 4  10.40.83.121 (10.40.83.121)  11.396 ms  16.729 ms  10.399 ms
 5  10.40.84.134 (10.40.84.134)  12.910 ms  37.984 ms  11.822 ms
 6  et-1-0-19.edge1.milan1.level3.net (213.249.124.141)  11.434 ms  13.086 ms  21.623 ms
 7  gtt-level3-milan1.level3.net (4.68.39.134)  14.996 ms  9.441 ms  10.862 ms
 8  ae9.cr0-pao1.ip4.gtt.net (89.149.128.238)  177.527 ms *  189.262 ms
 9  as7065.xe-1-0-6.ar1.pao1.us.as4436.gtt.net (69.22.130.86)  172.590 ms  168.864 ms  166.638 ms
10  102.ae1.cr1.pao1.sonic.net (70.36.205.5)  174.767 ms  178.644 ms  173.231 ms
11  0.ae0.cr1.colaca01.sonic.net (70.36.205.62)  171.088 ms  191.917 ms  181.782 ms
12  0.ae0.cr1.snrfca01.sonic.net (157.131.209.82)  178.734 ms
    0.ae2.cr2.colaca01.sonic.net (157.131.209.66)  187.424 ms  189.238 ms
13  0.xe-1-3-0.gw3.snfcca01.sonic.net (142.254.59.26)  178.881 ms
    0.ae2.cr2.snrfca01.sonic.net (157.131.209.170)  189.830 ms
    0.xe-1-3-0.gw3.snfcca01.sonic.net (142.254.59.26)  185.560 ms
14  * 0.xe-1-3-0.gw4.snfcca01.sonic.net (142.254.59.66)  175.978 ms
    0.xe-1-3-1.gw4.snfcca01.sonic.net (142.254.59.70)  173.378 ms
15  * * *
16  * * *
17  * * *
18  * * *
...
64  * * *
  • Why so many sonic.net bounces?
  • Why indefinite number of '* * *'?

The timeout error:

---------------------------------------------------------------------------
TimeoutError                              Traceback (most recent call last)
File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/connection.py:169, in HTTPConnection._new_conn(self)
    168 try:
--> 169     conn = connection.create_connection(
    170         (self._dns_host, self.port), self.timeout, **extra_kw
    171     )
    173 except SocketTimeout:

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/util/connection.py:96, in create_connection(address, timeout, source_address, socket_options)
     95 if err is not None:
---> 96     raise err
     98 raise socket.error("getaddrinfo returns an empty list")

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/util/connection.py:86, in create_connection(address, timeout, source_address, socket_options)
     85     sock.bind(source_address)
---> 86 sock.connect(sa)
     87 return sock

TimeoutError: [Errno 60] Operation timed out

During handling of the above exception, another exception occurred:

NewConnectionError                        Traceback (most recent call last)
File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/connectionpool.py:699, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
    698 # Make the request on the httplib connection object.
--> 699 httplib_response = self._make_request(
    700     conn,
    701     method,
    702     url,
    703     timeout=timeout_obj,
    704     body=body,
    705     headers=headers,
    706     chunked=chunked,
    707 )
    709 # If we're going to release the connection in ``finally:``, then
    710 # the response doesn't need to know about the connection. Otherwise
    711 # it will also try to release it and we'll have a double-release
    712 # mess.

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/connectionpool.py:382, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
    381 try:
--> 382     self._validate_conn(conn)
    383 except (SocketTimeout, BaseSSLError) as e:
    384     # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/connectionpool.py:1010, in HTTPSConnectionPool._validate_conn(self, conn)
   1009 if not getattr(conn, "sock", None):  # AppEngine might not have  `.sock`
-> 1010     conn.connect()
   1012 if not conn.is_verified:

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/connection.py:353, in HTTPSConnection.connect(self)
    351 def connect(self):
    352     # Add certificate verification
--> 353     conn = self._new_conn()
    354     hostname = self.host

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/connection.py:181, in HTTPConnection._new_conn(self)
    180 except SocketError as e:
--> 181     raise NewConnectionError(
    182         self, "Failed to establish a new connection: %s" % e
    183     )
    185 return conn

NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x10843e910>: Failed to establish a new connection: [Errno 60] Operation timed out

During handling of the above exception, another exception occurred:

MaxRetryError                             Traceback (most recent call last)
File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/requests/adapters.py:439, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
    438 if not chunked:
--> 439     resp = conn.urlopen(
    440         method=request.method,
    441         url=url,
    442         body=request.body,
    443         headers=request.headers,
    444         redirect=False,
    445         assert_same_host=False,
    446         preload_content=False,
    447         decode_content=False,
    448         retries=self.max_retries,
    449         timeout=timeout
    450     )
    452 # Send the request.
    453 else:

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/connectionpool.py:755, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
    753     e = ProtocolError("Connection aborted.", e)
--> 755 retries = retries.increment(
    756     method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
    757 )
    758 retries.sleep()

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/urllib3/util/retry.py:574, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
    573 if new_retry.is_exhausted():
--> 574     raise MaxRetryError(_pool, url, error or ResponseError(cause))
    576 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)

MaxRetryError: HTTPSConnectionPool(host='api-files.daptics.ai', port=443): Max retries exceeded with url: /session/S97nfh5bmzf3m2jkzrf5/analytics/gen/3/PredRespProfile2D.pdf?token=QTEyOEdDTQ.fzRqKs95aMkSWaKnTUxdWrC0se_FGz9x62s8S-UpK60sATEBdhanXIoeXVE.iVb7DpOy7QI15LLf.EXN4HFyCgqGtXfa5BA4vzwmRwWxKxpqZ4v008HzJdGfu907KXQAzGRzIBZY1UgwlBlfaTX-8z6nqqznXDj8KeUGKbZaTaOTXLpPnYuBR7RSAfOUOejxEN5Tl7kn9kI13_xBXtzqTZCRqQNKijI-b3f6Y_HUToq-XPo8k7xjNn86x1Vv49DZYrzLKj-Ph9H7W7W5zx34725khBt5KQHBrjxFEM4mgm_xLng.UcB4xy2SIOQ5sR3QM6RxgA (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x10843e910>: Failed to establish a new connection: [Errno 60] Operation timed out'))

During handling of the above exception, another exception occurred:

ConnectionError                           Traceback (most recent call last)
Input In [28], in <cell line: 10>()
      5 print('Generating analytics files.')
      7 # Generate any analytics files that are available for this generation.
      8 # Since the `auto_task_timeout` option has been set, the script will
      9 # block until the files are ready to be downloaded.
---> 10 daptics.generate_analytics()
     12 print('Downloading analytics files.')
     14 # Fetch the PDF analytics files via authenticated HTTP, and save them
     15 # to the './output' directory, where your automation workflow
     16 # software can pick them up.

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:2806, in DapticsClient.generate_analytics(self)
   2804 task_id = data['createAnalytics']['taskId']
   2805 self.task_info[task_id] = data['createAnalytics']
-> 2806 auto_task = self._auto_task()
   2807 if auto_task is not None:
   2808     return {'createAnalytics': auto_task}

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:2730, in DapticsClient._auto_task(self, timeout_override)
   2727 if timeout is None:
   2728     return None
-> 2730 data, errors = self.wait_for_current_task(
   2731     task_type=None, timeout=timeout)
   2732 self._raise_exception_on_error(data, errors)
   2734 return data['currentTask']

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:2680, in DapticsClient.wait_for_current_task(self, task_type, timeout)
   2678 retry = 0
   2679 while True:
-> 2680     data, errors = self.poll_for_current_task(task_type)
   2681     if data and 'currentTask' in data and data['currentTask'] is not None:
   2682         status = data['currentTask']['status']

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:2638, in DapticsClient.poll_for_current_task(self, task_type)
   2636                 self.analytics = result['analytics']
   2637                 if auto_export_path is not None:
-> 2638                     self.download_all_analytics_files(
   2639                         self.analytics, auto_export_path, True)
   2640 else:
   2641     data = {'currentTask': None}

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:2848, in DapticsClient.download_all_analytics_files(self, analytics, directory, name_by_gen)
   2846 if 'url' in file and 'filename' in file:
   2847     url, params = self.download_url_and_params(file['url'])
-> 2848     response = requests.get(url, params=params)
   2849     if response.status_code == requests.codes.ok and response.content is not None:
   2850         if file_count == 0:

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/requests/api.py:76, in get(url, params, **kwargs)
     65 r"""Sends a GET request.
     66 
     67 :param url: URL for the new :class:`Request` object.
   (...)
     72 :rtype: requests.Response
     73 """
     75 kwargs.setdefault('allow_redirects', True)
---> 76 return request('get', url, params=params, **kwargs)

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/requests/api.py:61, in request(method, url, **kwargs)
     57 # By using the 'with' statement we are sure the session is closed, thus we
     58 # avoid leaving sockets open which can trigger a ResourceWarning in some
     59 # cases, and look like a memory leak in others.
     60 with sessions.Session() as session:
---> 61     return session.request(method=method, url=url, **kwargs)

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/requests/sessions.py:542, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    537 send_kwargs = {
    538     'timeout': timeout,
    539     'allow_redirects': allow_redirects,
    540 }
    541 send_kwargs.update(settings)
--> 542 resp = self.send(prep, **send_kwargs)
    544 return resp

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/requests/sessions.py:655, in Session.send(self, request, **kwargs)
    652 start = preferred_clock()
    654 # Send the request
--> 655 r = adapter.send(request, **kwargs)
    657 # Total elapsed time of the request (approximately)
    658 elapsed = preferred_clock() - start

File ~/.pyenv/versions/3.9.5/lib/python3.9/site-packages/requests/adapters.py:516, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
    512     if isinstance(e.reason, _SSLError):
    513         # This branch is for urllib3 v1.22 and later.
    514         raise SSLError(e, request=request)
--> 516     raise ConnectionError(e, request=request)
    518 except ClosedPoolError as e:
    519     raise ConnectionError(e, request=request)

ConnectionError: HTTPSConnectionPool(host='api-files.daptics.ai', port=443): Max retries exceeded with url: /session/S97nfh5bmzf3m2jkzrf5/analytics/gen/3/PredRespProfile2D.pdf?token=QTEyOEdDTQ.fzRqKs95aMkSWaKnTUxdWrC0se_FGz9x62s8S-UpK60sATEBdhanXIoeXVE.iVb7DpOy7QI15LLf.EXN4HFyCgqGtXfa5BA4vzwmRwWxKxpqZ4v008HzJdGfu907KXQAzGRzIBZY1UgwlBlfaTX-8z6nqqznXDj8KeUGKbZaTaOTXLpPnYuBR7RSAfOUOejxEN5Tl7kn9kI13_xBXtzqTZCRqQNKijI-b3f6Y_HUToq-XPo8k7xjNn86x1Vv49DZYrzLKj-Ph9H7W7W5zx34725khBt5KQHBrjxFEM4mgm_xLng.UcB4xy2SIOQ5sR3QM6RxgA (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x10843e910>: Failed to establish a new connection: [Errno 60] Operation timed out'))

Secure connection for notebook API server

  1. The URL scheme for the API server exposed in the notebooks should be https: to protect login credentials and access keys.
  2. Also it would be better to use a CNAME for the server hostname, such as api-v0-9.daptics.ai rather than inertia.protolife.com.

bug: login method should raise exception on invalid credentials

[branch v0.12.0]

Trying to execute 03_SimpleTutorial.ipynb

on cell

name = datetime.now().strftime('Practice Session %Y%m%d-%H%M%S')
description = 'This is a practice session'
print('Session name = ',name)
daptics.create_session(name, description);

get:

Problem creating session!
Error: Argument "session" has invalid value $session.
In field "userId": Expected type "String!", found null.
Hint: session name may have already been taken, in which case choose another one.

The problem is that $session is not filled in...

daptics.print()
client_version =  0.12.0
host =  https://api.daptics.ai
credentials =  None
options =  {'auto_export_path': None, 'auto_generate_next_design': False, 'auto_task_timeout': None, 'run_tasks_async': False}
user_id =  None
session_id =  None
session_name =  None
session_tag =  None
task_info =  {}
gen =  -1
remaining =  None
completed =  False
Experimental Space Definition: None
Design: None
Experiments History: None
Analytics: None

user_id and other session info are not present, and looked for later.
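Until login raises on invalid credentials, a notebook can guard itself by checking that login actually populated the session state before calling create_session. A sketch assuming the attribute name shown in the daptics.print() output above (user_id):

```python
def ensure_logged_in(client):
    """Raise early if login did not populate the client, instead of letting
    create_session() fail later with a confusing '$session' GraphQL error.
    The attribute name follows the daptics.print() output (user_id)."""
    if getattr(client, "user_id", None) is None:
        raise RuntimeError(
            "Not logged in: user_id is None. Check your credentials "
            "before calling create_session().")
    return client

# Usage (hypothetical): ensure_logged_in(daptics).create_session(name, description)
```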

bug: Error creating analytics files with prohibited characters in parameter names

Hi Daptics,

I have completed a new campaign of 10 generations using Daptics API Version 0.9.3, and I get the following errors (1, 2) when running "get_all_analytics_files" for session "06bf3gqzmiigwzab7io7":

(1)
failed with error: {'path': ['createAnalytics'], 'message': "In /home/shiny/rservex-sessions/Ditlev_20200412155004.wzn6.local/R , runtime error: Error in FUN(X[[i]], ...): object 'log_k' not found\n", 'locations': [{'line': 2, 'column': 0}]}

(2)
failed with error: {'path': ['createAnalytics'], 'message': 'In /home/shiny/rservex-sessions/Ditlev_20200413104912.uqok.local/R , runtime error: Error in pdf(fileName): too many open devices\n', 'locations': [{'line': 2, 'column': 0}]}

ESD parameter names are:
log_k+O1
log_k-O1
log_k+O2
log_k-O2
log_k+T
log_k-T
log_kL

Could the first error have something to do with the fact that parameter names contain hyphens or plus signs?

EDIT:
Certain special characters (-, +) are not allowed in ESD parameter names, and so I have successfully completed a new campaign after stripping hyphens and plus signs.

Best,
Ditlev
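Until the server validates parameter names, stripping the problematic characters client-side avoids the failure. A minimal sketch; only the two characters reported here ('-' and '+') are handled, since the full prohibited set is undocumented:

```python
def sanitize_param_name(name, prohibited="+-", replacement="_"):
    """Replace characters known to break analytics generation ('-' and '+'
    were the ones reported) in an ESD parameter name. The complete
    prohibited set is undocumented; extend `prohibited` as needed."""
    return "".join(replacement if c in prohibited else c for c in name)

# e.g. the names from this issue:
# sanitize_param_name("log_k+O1") -> "log_k_O1"
# sanitize_param_name("log_k-T")  -> "log_k_T"
```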

bug: Cannot query field "gen" on type "Task"

Trying to run 06_AutomationWorkflow.ipynb

It seems to log in fine and creates the session OK, but then has a problem finding gen.

NB: the input files input/experimental_space.csv and input/experiments.csv are correctly generated.

Creating the session Automated Workflow 20210429-114322.
Creating the experimental space. This may take a minute or more.
---------------------------------------------------------------------------
GraphQLError                              Traceback (most recent call last)
<ipython-input-2-e69e0a7afe29> in <module>
     42 # each experimental input parameter, and set other meta-parameters,
     43 # to completely initialize the Daptics engine for this campaign.
---> 44 daptics.put_experimental_parameters_csv(csv_space_file, space_params)
     45 
     46 # Because the `auto_task_timeout` option was set, the Python script

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in put_experimental_parameters_csv(self, fname, params)
   1590             param_rows = [r for r in reader]
   1591         params['space']['table'] = {'data': param_rows}
-> 1592         return self.put_experimental_parameters(params)
   1593 
   1594     def get_experiments(self, design_only=False, gen=None):

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in put_experimental_parameters(self, params)
   1480             task_id = data['putExperimentalParameters']['taskId']
   1481             self.task_info[task_id] = data['putExperimentalParameters']
-> 1482             auto_task = self._auto_task()
   1483             if auto_task is not None:
   1484                 return {'putExperimentalParameters': auto_task}

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in _auto_task(self, timeout_override)
   2662 
   2663         data, errors = self.wait_for_current_task(
-> 2664             task_type=None, timeout=timeout)
   2665         self._raise_exception_on_error(data, errors)
   2666 

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in wait_for_current_task(self, task_type, timeout)
   2611         retry = 0
   2612         while True:
-> 2613             data, errors = self.poll_for_current_task(task_type)
   2614             if data and 'currentTask' in data and data['currentTask'] is not None:
   2615                 status = data['currentTask']['status']

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in poll_for_current_task(self, task_type)
   2506         """)
   2507 
-> 2508         data, errors = self.call_api(doc, vars)
   2509         if data and 'currentTask' in data and data['currentTask'] is not None:
   2510             if 'status' in data['currentTask'] and 'type' in data['currentTask']:

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in call_api(self, document, vars, timeout)
    701 
    702         if self.gql.schema:
--> 703             self.gql.validate(document)
    704 
    705         try:

~/.pyenv/versions/3.7.2/lib/python3.7/site-packages/gql/client.py in validate(self, document)
     42         validation_errors = validate(self.schema, document)
     43         if validation_errors:
---> 44             raise validation_errors[0]
     45 
     46     def execute(self, document, *args, **kwargs):

GraphQLError: Cannot query field "gen" on type "Task".

sparsefactorial does not work.

Something about sparseCombMaxSize not being True.

Works in pdt-server dev branch.

Maybe this is an issue for rservex...

verify_ssl_certificates key error

in both dev and main branches, daptics.connect() yields:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-4-9bdca8fbd813> in <module>
      4 # GraphQL schema.
      5 
----> 6 daptics.connect()

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in connect(self)
   1056                 auth=self.auth,
   1057                 use_json=True,
-> 1058                 verify=self.options['verify_ssl_certificates'])
   1059             self.gql = gql.Client(
   1060                 transport=http, fetch_schema_from_transport=True)

KeyError: 'verify_ssl_certificates'
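The KeyError suggests connect() indexes self.options['verify_ssl_certificates'] directly. A defensive lookup with a documented default would avoid crashing when the options dict was built by older code. A minimal sketch; the default of True is an assumption:

```python
def get_option(options, key, default=True):
    """Read a client option defensively: a missing key falls back to a
    default instead of raising KeyError (here: verify SSL certs by default)."""
    return options.get(key, default)

# e.g. inside connect() (hypothetical):
# verify = get_option(self.options, 'verify_ssl_certificates')
```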

@nhpackard, @pzingg: I now recall why I previously had to use forking, rather than the usual edit/commit/push, in daptics-api: I don’t have writing permissions for that repo. Could you please grant them to me?


Originally posted by @ggazzola in #3 (comment)

Oh man! Sorry about that. Should be fixed.

Document explicitly which functions create tasks

Regardless of option settings, these two methods will create long-running tasks that must be polled or waited on.

  1. put_experimental_parameters (or its csv variant) creates a "space" task
  2. generate_design creates a "generate" task

But if the auto_generate_next_design option is set and there are any remaining generations to be designed in the session, then put_experiments (or its csv variant) will automatically start a "generate" task after it has uploaded the experiments to the server. The task information will then be available in the "task" attribute of the result.
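The polling pattern described above can be sketched generically. This assumes only that the poll callable returns a dict with a 'status' field, as poll_for_current_task does for the current task; the terminal-status names other than 'success' are assumptions:

```python
import time

def wait_for_task(poll, timeout=600.0, interval=2.0):
    """Poll a long-running task until its status leaves 'new'/'running',
    or time out. `poll` stands in for a call like
    daptics.poll_for_current_task(); it must return a dict with 'status'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = poll()
        if task and task.get("status") not in ("new", "running"):
            return task
        time.sleep(interval)
    raise TimeoutError("task did not finish within %.0f seconds" % timeout)
```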

`simulate_experiment_responses` should work with design?

Only way I could get it to work was to first call random_experiments_with_responses to generate experiments, then hand over those experiments to simulate_experiment_responses.

Maybe simulate_experiment_responses should have identical args to random_experiments_with_responses?

Maybe names should also be parallel, e.g.
random_experiments_with_responses
simulated_experiments_with_responses

Cannot save experiments while a update task is active

Occasionally the API hangs with the message "Cannot save experiments while a update task is active".

Conditions for reproducing the error are currently unclear. I can typically execute a couple of generations before the error appears, and then the session cannot move forward.

Error occurs after a call like:

newgen = daptics.put_experiments_csv(
    DapticsExperimentsType.DESIGNED_WITH_OPTIONAL_EXTRAS,
    csv_experiments_file)

but error is also generated by calls like:

# Do the experiment:
# simulated experimental response
space = daptics.get_experimental_space()
design = daptics.design
experiments = daptics.random_experiments_with_responses(space, design)
expsim = daptics.simulate_experiment_responses(experiments)

Curiously, if the last example is executed without the last line, the error is not generated.

The full error is:

---------------------------------------------------------------------------
GraphQLError                              Traceback (most recent call last)
Input In [53], in <cell line: 1>()
----> 1 newgen = daptics.put_experiments_csv(
      2     DapticsExperimentsType.DESIGNED_WITH_OPTIONAL_EXTRAS,
      3     csv_experiments_file)

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:2120, in DapticsClient.put_experiments_csv(self, experiments_type, fname)
   2118 else:
   2119     raise CsvNoDataRowsError(fname)
-> 2120 return self.put_experiments(experiments_type, experiments)

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:2025, in DapticsClient.put_experiments(self, experiments_type, experiments)
   2023 else:
   2024     data, errors = self.call_api(doc, vars)
-> 2025 self._raise_exception_on_error(data, errors)
   2027 if 'putExperiments' in data and data['putExperiments'] is not None:
   2028     task_id = data['putExperiments']['taskId']

File ~/Projects/daptics-api/python_client/daptics_client/daptics_client.py:856, in DapticsClient._raise_exception_on_error(self, data, errors)
    853 if not self._successful(data):
    854     if errors:
    855         # This is what gql does with errors
--> 856         raise GraphQLError(str(errors[0]['message']))
    857     else:
    858         raise GraphQLError('Unknown error')

GraphQLError: Cannot save experiments while a update task is active.

Implement automatic exports for `simulate_experiment_responses` and async tasks?

Should the client export automatically, to a CSV file with a well-known name, like "auto_simulated_.csv", if the auto_export_path option is set? Currently the 'auto_export_path' only works when tasks are polled.

Also, not sure if auto_export_path does anything if the run_tasks_async option is set. Need to test that combination.

api.daptics.ai not responding...

On daptics.connect(), I get the error:

---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
<ipython-input-5-eaf59004fbfc> in <module>
      3 # The 'connect' method will connect to the API server and obtain the
      4 # GraphQL schema.
----> 5 daptics.connect()

~/Projects/daptics-api/python_client/daptics_client/daptics_client.py in connect(self)
   1036                 verify=self.SSL_CERT_VERIFICATION)
   1037             self.gql = gql.Client(
-> 1038                 transport=http, fetch_schema_from_transport=True)
   1039 
   1040         compat = self.check_api_compatibility()

~/.pyenv/versions/3.7.2/lib/python3.7/site-packages/gql/client.py in __init__(self, schema, introspection, type_def, transport, fetch_schema_from_transport, retries)
     35                 not schema
     36             ), "Cannot fetch the schema from transport if is already provided"
---> 37             introspection = transport.execute(parse(introspection_query)).data
     38         if introspection:
     39             assert (

~/.pyenv/versions/3.7.2/lib/python3.7/site-packages/gql/transport/requests.py in execute(self, document, variable_values, operation_name, timeout)
    122 
    123         if "errors" not in result and "data" not in result:
--> 124             response.raise_for_status()
    125             raise requests.HTTPError(
    126                 "Server did not return a GraphQL result", response=response

~/.pyenv/versions/3.7.2/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
    939 
    940         if http_error_msg:
--> 941             raise HTTPError(http_error_msg, response=self)
    942 
    943     def close(self):

HTTPError: 502 Server Error: Bad Gateway for url: https://api.daptics.ai/api

I checked that I can ping api.daptics.ai fine. Tried opening a VPN; no joy.

doc: Notebook and documentation improvements

This is the main TODO list, and refers to the steps in the 03_SimpleTutorial notebook. Details and discussion on the items in the list are in subsequent comments.

General

  • Review minor documentation edits here.

  • Omit all print commands that are currently used for "debugging" purposes (e.g., in Steps 1, 2, 3, etc.) and replace them with a validation/test within an existing function (e.g., have `daptics.connect()` check whether `daptics.gql.__dict__` contains the gql attribute and throw an error if it doesn't).

  • Hide (e.g., within existing functions) all variables that the user should not change.

  • Merge steps {5,6}, {9,10}, and {13,10}: e.g., hide the `daptics.poll_for_current_task()` call in a while loop within `daptics.save_experimental_and_space_parameters_csv()` / `daptics.save_experiment_responses_csv()`, and break the loop when `status == 'success'`.

  • Document how to execute a code block ("getting started")

  • Document how to re-start from scratch ("getting started")

  • Document how to implement the [generate design - upload responses] loop (for users who would benefit from full automation, e.g., users that perform simulated experiments)

  • Document that there is a way to go from design to design + responses = experiments for the next design without using CSV files, using in-memory Python structures only.

  • Document how to retrieve the design/responses of generation n.

  • Keep track of the last completed task/step, so that the function(s) called at step X can return an error message like "You can run this step only after running Step Y" if the user tries to run step X at the wrong time.

  • Document how to recover from a failed step

  • [postpone] Add input/file validation (e.g., return informative errors if file format is wrong).

  • [not natural] Change step index from "Step 1", "Step 2", ... to "Step A", "Step B", ... (to avoid confusion between step index and the index within brackets in In [ ] on the left side of each code cell).
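
The merge-steps idea above (hiding the `poll_for_current_task()` call in a loop that breaks on success) could be sketched generically like this. The status values and the shape of the polling callable are assumptions for illustration, not the actual client API:

```python
import time

def wait_until_success(poll, timeout=600, interval=2.0):
    """Poll a task until it reports success, or raise on failure/timeout.

    `poll` is assumed to return a dict with a 'status' key that is one of
    'running', 'success', or 'failed' (hypothetical shape).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll()['status']
        if status == 'success':
            return
        if status == 'failed':
            raise RuntimeError('task failed')
        time.sleep(interval)
    raise TimeoutError('task did not finish within %s seconds' % timeout)
```

Each `save_*` function could then call a helper like this internally, so the notebook user never sees the polling loop.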

Step 3

  • If you create a session with the same name as a previously API-generated session, you get an error (OK), but the error message is not understandable.

  • Will the user ever want to set `is_Demo=True`? If so, explain when/why in the documentation. If not, hide the `is_Demo=False` statement (e.g., within `daptics.create_session()`).

  • This piece of documentation may be a bit unclear: "To avoid parsing bugs, add blank padding columns for rows that have fewer parameter values, so that the CSV file has the same number of columns on each row." Maybe add an example?

  • Documentation: A bit unclear what is meant by "home page" of the "Jupyter server".

  • Documentation: factorial_space.csv is inconsistent with fname = 'esd-mixture-5.csv'.
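
For the padding bullet above, an example might help. Here is a hypothetical space file where rows with fewer parameter values get blank padding columns, so every row has the same number of columns:

```csv
param1,0,1,2,3
param2,0,1,,
param3,0,1,2,
```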

Step 7

  • I don't quite understand this step: why are we exporting to/reimporting from an extra validated_space.csv file? Can't we simply print the space file that was specified in Step 5 (or skip Step 7 altogether)?

Step 8

  • Documentation: clarify that we present three different options/ways of running step 8, and that the user should choose one of them.

  • BUG: The first of the three Step-8 options generates an error (NameError: name 'colHeaders' is not defined)

  • Documentation (alternate step): The parameter names in the file are V11, V12, V13, not param1,param2,param3,param4. Maybe change names in the file to param1,param2,param3 and remove param4 in the documentation?

Step 9

  • Documentation: "Submit the initial experiments file downloaded" --> "Submit the initial experiments file created" ?

Step 11

  • `gen` and `fname` should be calculated automatically, and not left for the user to define (see 'Hide ...' in General); remove the "We always need to specify the design generation number to retrieve a design" bit in the documentation.

  • Why not save the N-th design directly to genN_experiments.csv, rather than saving it to genN_design.csv and having the user manually duplicate and rename that file in the next step?

Step 12

  • Documentation: clarify that we present two different ways of running step 12, and that the user should choose one of them.

  • Documentation: explain what the user should do if they have extra experiments (and explain what extra experiments are).

Step 13

  • Often freezes with errors like:
status = running -- 37 seconds.
Error:  HTTPConnectionPool(host='inertia.protolife.com', port=8080): Read timed out. (read timeout=None)
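
Until the underlying read-timeout issue is fixed, a generic retry wrapper with backoff might work around the intermittent freezes. This is a sketch, not part of the actual client; the call site it would wrap is an assumption:

```python
import time

def retry(fn, attempts=3, base_delay=1.0, exceptions=(Exception,)):
    """Call fn(), retrying with exponential backoff on the given exceptions."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

For example, `retry(lambda: daptics.poll_for_current_task(), exceptions=(ConnectionError,))` (hypothetical call site) would retry the poll a few times before giving up.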

Missing site files in new pdoc/mkdocs build

Some of these errors might be due to opening the docs on the filesystem rather than from a static web server:

  • highlight.min.js not loaded
  • hljs is not defined at index.html:29
  • fontawesome-webfont.woff not loaded
  • RobotoSlab-Bold.ttf not loaded
  • Script '/docs/search/worker.js' cannot be accessed from origin 'null' at /docs/search/main.js:93:22

More docs available directly in repo

Right now, the repo's README.md describes how to build documentation using MkDocs and pydocmd.

I think some version of the basic documentation should be available directly from the repo's README.md (without going through the doc build process).

`daptics.wait_for_current_task()` should wait for all tasks in queue?

Inside a loop I have the following code. It worked fine for gen 3, but failed on gen 4.

Maybe I should follow `task = daptics.export_csv(fname, myexps)` with a `daptics.wait_for_current_task()`? Maybe for gen 3 it finished fast enough not to get tripped up?

Could it be that `daptics.wait_for_current_task()` should wait for all tasks in the queue?

    print('time for gen',daptics.gen,':  ',time.time()-start)

    # put in the responses
    myexps = design['table']
    for i in range(len(myexps['data'])):
        myexps['data'][i][-1]= str(resp[i])
        
    # e.g. fname = 'Experiments/gen1_experiments.csv'
    task = daptics.export_csv(fname,myexps)

    # save responses and generate next design
    start = time.time()
    task = daptics.put_experiments_csv(DapticsExperimentsType.DESIGNED_WITH_OPTIONAL_EXTRAS, fname)  # gen next design
    daptics.wait_for_current_task()
    task = daptics.generate_design()
    daptics.wait_for_current_task()
    print('Done with design.')
    print('time for daptics = ',time.time()-start,"   seconds.")
    print("Now gen = ",daptics.gen)
    fname = 'gen'+str(daptics.gen)+'_experiments.csv'
    # e.g. fname = 'gen1_design.csv'
    print("Saving design to: ",fname)
    design = daptics.export_generated_design_csv(fname) # the new design with blank responses.

I got

time for gen 4 :   1177.0454638004303

Task status = running after 1806 retries...
No current task was found!Done with design.
time for daptics =  2369.008800983429    seconds.
Now gen =  4
Saving design to:  gen4_experiments.csv
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-30-3bb54a625a83> in <module>
     51     # e.g. fname = 'gen1_design.csv'
     52     print("Saving design to: ",fname)
---> 53     design = daptics.export_generated_design_csv(fname) # the new design with blank responses.
     54 
     55     pool.close()

~/Projects/genelife/fastgenegol/genelifepy/daptics-api/python_client/daptics_client.py in export_generated_design_csv(self, fname, gen)

...

         if result.errors:
---> 52             raise Exception(str(result.errors[0]))
     53 
     54         return result.data

Exception: {'path': ['experiments'], 'message': 'timeout', 'locations': [{'line': 2, 'column': 0}]}
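
A drain-the-queue helper along the lines suggested above might look like the sketch below. The `poll_for_current_task()` name is from the client, but the return convention (`None` when the queue is idle) is an assumption:

```python
import time

def wait_for_all_tasks(client, interval=2.0, max_polls=10000):
    """Keep polling until the client reports no current task.

    Assumes client.poll_for_current_task() returns a task dict while any
    task is queued or running, and None when the queue is empty
    (hypothetical convention).
    """
    for _ in range(max_polls):
        if client.poll_for_current_task() is None:
            return
        time.sleep(interval)
    raise TimeoutError('tasks still pending after %d polls' % max_polls)
```

Calling this after `export_csv` and before `put_experiments_csv` would ensure the export task has cleared before the next one starts.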

bug: PDFs are not rendering in notebooks in Chrome

The PDF files are fine, but in the Jupyter notebook when trying to open or render them, you get a black box and the console has:

Failed to load 'about:blank' as a plugin, because the frame into which the 
plugin is loading is sandboxed.

Works fine in Firefox.
