
healthcare's Introduction

Cloud Healthcare

How to Contribute

For Users:

Please click on the subfolder of this repository that you are interested in exploring. If you have any questions about a design pattern, an example, or a utility, please post in GitHub Discussions or file a GitHub Issue using GitHub Actions.

For Partners:

Please click on the subfolder of this repository that you are interested in exploring. If you have any questions about design patterns, architectures, or utilities, please post in GitHub Discussions or file a GitHub Issue using GitHub Actions. If you would like to contribute to this repository, please create a pull request and assign it to a Google reviewer (the list is below). We will later automate this process via GitHub Actions.

For Googlers:

Please click on the subfolder of this repository that you are interested in exploring. If you have any questions about design patterns, architectures, or utilities, please post in GitHub Discussions or file a GitHub Issue using GitHub Actions. If you would like to contribute to this repository, please create a pull request and assign it to a Google reviewer (the list is below). We will later automate this process via GitHub Actions. If you are assigned a pull request, please review it within 24-48 hours to maintain good reviewing etiquette.

Links to GCP Healthcare Repositories and Point of Contact

  • Healthcare Dataflow Harmonization: maintained internally at Google. Contact ygupta89@.
  • Healthcare DICOM Web Adapter: contact jasonklotzer@.
  • Healthcare Data Labeling: not actively maintained on OSS.

Contributor License Agreement

Contributions to this project must be accompanied by a Contributor License Agreement. You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project. Head over to https://cla.developers.google.com/ to see your current agreements on file or to sign a new one.

You generally only need to submit a CLA once, so if you've already submitted one (even if it was for a different project), you probably don't need to do it again.

Code reviews

All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.

healthcare's People

Contributors

ajitsonawane1, aleksandrhovhannisyan, chenmoneygithub, codezizo, cushon, dependabot[bot], devanshmodi, dgparker, dufton, gruihuang, lastomato, mfomitchev, midjones, nikklassen, pavertomato, rachelgagnon, rbkucera, rchen152, sadeel, surjits254, svetakvsundhar, toby-hu, tompollard, umairidris, xingao267


healthcare's Issues

Data loading script failing

Hello,
Can you please tell me which folder I need to be in to execute the following command?
bazel run //datagen:data_gen -- --num=1 --output_path=demo_data.ndjson

I tried running it in the "healthcare/fhir/immunizations_demo" folder, but it failed with the error below:

INFO: Invocation ID: 26092ad2-2ae0-49c9-ac2c-4988bf053007
ERROR: Skipping '//datagen:data_gen': no such package 'datagen': BUILD file not found on package path
WARNING: Target pattern parsing failed.
ERROR: no such package 'datagen': BUILD file not found on package path
INFO: Elapsed time: 0.070s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
FAILED: Build did NOT complete successfully (0 packages loaded)
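
The //datagen:data_gen label only resolves from the directory whose BUILD file defines the data_gen target. One way to locate that package from the repository root (a generic search, not a documented step):

# Find the BUILD file(s) that define the data_gen target
grep -rl --include=BUILD "data_gen" .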

`mimic_eicu/tutorials/BigQuery_ML.ipynb` notebook needs some changes

In this notebook: https://github.com/GoogleCloudPlatform/healthcare/blob/master/datathon/mimic_eicu/tutorials/BigQuery_ML.ipynb

I believe that this snippet:

# Set up the substitution preprocessing injection
if bigquery.magics._run_query.func_name != 'format_and_run_query':
  original_run_query = bigquery.magics._run_query

needs to be modified in two ways:

  • bigquery.magics._run_query --> bigquery.magics.magics._run_query
  • In Python 3 (which I'm assuming everyone is using now), func_name --> __name__
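
With both changes applied, the guard would look roughly like this (a sketch that only applies the two substitutions above; format_and_run_query and original_run_query come from the surrounding notebook code):

# Set up the substitution preprocessing injection (patched for the current
# client library layout and for Python 3's __name__ attribute)
if bigquery.magics.magics._run_query.__name__ != 'format_and_run_query':
  original_run_query = bigquery.magics.magics._run_query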

Lung cancer training command error

I am working through an exercise on Qwiklabs.

When I try to train the model with the command:

python3 -m models.trainer.model   --training_data=gs://${BUCKET?}/tfrecords/training.tfrecord   --eval_data=gs://${BUCKET?}/tfrecords/eval.tfrecord   --model_dir=gs://${BUCKET?}/model   --training_steps=3000   --eval_steps=1000   --learning_rate=0.1   --export_model_dir=gs://${BUCKET?}/saved_model

I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/google5577230_student/healthcare/fhir/lung-cancer/models/trainer/model.py", line 36, in <module>
    tf.flags.DEFINE_string(
AttributeError: module 'tensorflow' has no attribute 'flags'
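
The error suggests the environment has TensorFlow 2.x, where the tf.flags alias was removed. A possible workaround (an assumption about the lab environment, not a confirmed fix) is to go through the TF1 compatibility shim or the absl package that tf.flags used to wrap:

# Option 1: use the TF1 compatibility shim (assumes TensorFlow 2.x is installed)
import tensorflow.compat.v1 as tf
tf.flags.DEFINE_string('training_data', None, 'GCS path to the training TFRecord file.')

# Option 2: use absl directly, which provides the same DEFINE_* functions
from absl import flags
flags.DEFINE_string('training_data', None, 'GCS path to the training TFRecord file.')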

Exposing EHR data via google cloud FHIR api

If we have an EHR deployed on premises, can we put the Google Cloud FHIR API on top of it, so that the EHR's data is exposed via the FHIR API hosted on Google Cloud?

Is this possible, or is this even a legitimate use case?

tf.python_io.TFRecordWriter deprecation warning

Hi,

On executing Step 3 in Cloud Shell, I received an error at line 121 of assemble_training_data.py:
The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.

The TFRecords do get created successfully in the storage bucket.
  • Hari
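
The warning itself names the replacement; a minimal sketch of the updated call in assemble_training_data.py (output_path and serialized_example stand in for whatever that line actually writes):

import tensorflow as tf

# tf.python_io.TFRecordWriter was renamed to tf.io.TFRecordWriter
with tf.io.TFRecordWriter(output_path) as writer:
  writer.write(serialized_example)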

Required IAM permission is not mentioned anywhere

Hi Team,

Thank you for the tutorial, but could you please list the IAM permissions required to complete it?
In a corporate setup we have to ask our system admin for IAM roles again and again as we progress through the tutorial, rather than requesting them all in one go. Listing them would help a lot.

Thanks,
Rohit Shah.

deployment of inference module is failing in breast_density_auto_ml.ipynb

I am using Python 3.
All required APIs are enabled.

WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using --no-enable-ip-alias flag. Use --[no-]enable-ip-alias flag to suppress this warning.
WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the --no-enable-autoupgrade flag.
WARNING: Starting with version 1.18, clusters will have shielded GKE nodes by default.
WARNING: Your Pod address range (--cluster-ipv4-cidr) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Permission denied on 'locations/{location}' (or it may not exist).
Error from server (NotFound): the server could not find the requested resource

The solution on the similar issue #400 (comment) is not working for me.

@codezizo

dicom-export-adapter issue on Google Project

I'm using the DICOM adapter hosted on a Google Cloud project. Import is working fine, but export is crashing.
[screenshot of the crash omitted]

I'm using the exact flags you have provided, except the subscription ID. I created a Pub/Sub topic and subscription (mySub) and set mySub as the ID, but it is still not working.

  - name: dicom-export-adapter
    image: gcr.io/cloud-healthcare-containers/dicom-export-adapter:latest
    command:
    - "--peer_dimse_aet=PEERAET"
    - "--peer_dimse_ip=localhost"
    - "--peer_dimse_port=104"
    - "--project_id=insertingmyProjectIDhere"
    - "--subscription_id=mysub"
    - "--dicomweb_addr=https://localhost:80"
    - "--oauth_scopes=https://www.googleapis.com/auth/pubsub"

Installing google-cloud-automl in breast_density_auto_ml.ipynb

!sudo pip3 install google-cloud-automl did not work for me.
It was giving me an error about not being able to find the automl module.

I had to instead use
!python3 -m pip install google-cloud-automl

which fixed the issue for me. This is just an FYI in case others come across the same issue.

bigquery-json.googleapis.com is not a valid API in new terraform version

When running:

bazel run cmd/apply:apply -- --config_path=<config_file_path> --enable_terraform

I get the error:

Error: expected service to not match any of [dataproc-control.googleapis.com source.googleapis.com stackdriverprovisioning.googleapis.com bigquery-json.googleapis.com], got bigquery-json.googleapis.com

Please check this line in the terraform-provider-google GitHub repository for the reason why.

I fixed it locally by changing "bigquery-json.googleapis.com" to "bigquery.googleapis.com" here.

However, I only fixed this locally and I don't know if this problem extends beyond the deployment scripts.

resource not found error on creating immunization with reaction

I am able to create an immunization when the reaction is set to None. However, if I choose any reaction from the dropdown, it throws the error below on "await this.resourceService.saveResource(this.reaction);".

Here is a snippet of the error log:

{
  "issue": [
    {
      "code": "no-store",
      "details": {
        "text": "storage_error"
      },
      "diagnostics": "resource not found: Observation/57fb1c67-065d-4360-b837-a25077435ed6-reaction",
      "severity": "error"
    }
  ],
  "resourceType": "OperationOutcome"
}

AssertionError: Failed getting instance UID for series

The store_tcia_in_hc_api script throws the error below when running in Datalab:

There are 2149 instances to upload...
Traceback (most recent call last):
  File "/usr/local/envs/py2env/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/local/envs/py2env/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/store_tcia_in_hc_api.py", line 191, in <module>
    main()
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/store_tcia_in_hc_api.py", line 87, in main
    study_uid_to_series_uid.values()), 1):
  File "/usr/local/envs/py2env/lib/python2.7/multiprocessing/pool.py", line 673, in next
    raise value
AssertionError: Failed getting instance UID for series 1.3.6.1.4.1.9590.100.1.2.171166015912754582622813408142073790722

ExportDicomData API to convert DICOMs to JPEGs fails

AssertionError: error exporting to JPEG, code: 400, response: {
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"output_config\": Cannot find field.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "description": "Invalid JSON payload received. Unknown name \"output_config\": Cannot find field."
          }
        ]
      }
    ]
  }
}

Importing data into FHIR store failing

curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -d '{"gcsSourceLocation":{"gcsUri":"gs://'${PROJECT_ID?}'/'demo_data.ndjson'"}}' "https://healthcare.googleapis.com/${API_VERSION?}/projects/${PROJECT_ID?}/locations/${REGION?}/datasets/${DATASET_ID?}/fhirStores/${FHIR_STORE_ID?}:import"
{
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"{\"gcsSourceLocation\":{\"gcsUri\":\"gs://dicom-poc/demo_data.ndjson\"}}\": Cannot bind query parameter. Field '{\"gcsSourceLocation\":{\"gcsUri\":\"gs://dicom-poc/demo_data' could not be found in request message.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "description": "Invalid JSON payload received. Unknown name \"{\"gcsSourceLocation\":{\"gcsUri\":\"gs://dicom-poc/demo_data.ndjson\"}}\": Cannot bind query parameter. Field '{\"gcsSourceLocation\":{\"gcsUri\":\"gs://dicom-poc/demo_data' could not be found in request message."
          }
        ]
      }
    ]
  }
}

I'm not sure why it's looking for demo_data instead of the demo_data.ndjson resource.
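
The "Cannot bind query parameter" wording usually means curl sent the body as form data rather than JSON, so the whole payload was treated as a field name. A sketch of the same request with an explicit Content-Type header (an assumption, not a verified fix; field names such as gcsSource versus gcsSourceLocation differ between API versions, so keep whichever your ${API_VERSION?} documents):

curl -X POST \
     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     -d '{"gcsSource":{"uri":"gs://'${PROJECT_ID?}'/demo_data.ndjson"}}' \
     "https://healthcare.googleapis.com/${API_VERSION?}/projects/${PROJECT_ID?}/locations/${REGION?}/datasets/${DATASET_ID?}/fhirStores/${FHIR_STORE_ID?}:import"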

Inappropriate value for attribute "metadata": element "items": string required.

I am referring to https://github.com/GoogleCloudPlatform/healthcare/tree/master/deploy to enable Data Protection Toolkit

cat config.yaml
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
 
# This sample configuration provides the minimum configuration required by the DPT scripts.
# Audit resources will be created locally in the project.
 
overall:
  organization_id: '7954763295'
  billing_account: 89NJCG-KL987Y-LPIU76
  domain: mydomain.com
 
generated_fields_path: ./generated_fields.yaml
 
projects:
- project_id: ghcdrupalprojectdpt
  owners_group: [email protected]
  auditors_group: [email protected]
  audit:
    logs_bigquery_dataset:
      dataset_id: mydomain_ghcdrupalprojectdpt001_logs  # Bigquery Dataset names must use underscores.
      location: US
  devops:
    state_storage_bucket:
      name: mydomain-ghcdrupalprojectdpt-state
      location: US
  compute_instances:
  - name: ghcdrupalprojectdpt-instance
    zone: us-central1-a
    machine_type: n1-standard-1
    boot_disk:
      initialize_params:
        image: debian-cloud/debian-9
    network_interface:
      network: default
    metadata:
      items:
        - key: startup-script
          value: sudo apt-get update

I am running the bazel run command below and encountering the error: Inappropriate value for attribute "metadata": element "items": string required.

bazel run cmd/apply:apply -- --config_path=config.yaml --projects=ghcdrupalprojectdpt

2020/04/11 20:48:18 Running: [terraform apply]
Releasing state lock. This may take a few moments...
2020/04/11 20:48:23 Failed to apply configs: failed to apply "ghcdrupalprojectdpt": failed to apply resources: failed to apply plan: exit status 1:
Error: Incorrect attribute value type
 
  on main.tf.json line 156, in resource[5].google_compute_instance.ghcdrupalprojectdpt-instance:
 156:      "metadata": {
 157:       "items": [
 158:        {
 159:         "key": "startup-script",
 160:         "value": "sudo apt-get update"
 161:        }
 162:       ]
 163:      },
 
Inappropriate value for attribute "metadata": element "items": string
required.
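
Terraform's google_compute_instance resource expects metadata as a flat map of strings rather than the Compute API's items list, which matches the error. A sketch of the corrected stanza (an assumption based on the metadata format that works in the startup-script issue further down this page):

  compute_instances:
  - name: ghcdrupalprojectdpt-instance
    metadata:
      startup-script: sudo apt-get update  # flat key/value map instead of an items list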

demo data failing to upload to GCS

Uploading the generated data to GCS using the command below is failing:
gsutil -u ${PROJECT_ID?} cp demo_data.ndjson gs://${PROJECT_ID?}/demo_data.ndjson

CommandException: No URLs matched: demo_data.ndjson

FYI, the previous command completed successfully for me:
bazel run //datagen:data_gen -- --num=1 --output_path=demo_data.ndjson
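
"No URLs matched" means gsutil cannot see demo_data.ndjson in its working directory; bazel run executes the generator from its own runfiles directory, so the output file may not land where the shell is. One possible workaround (an assumption, not the documented flow) is to write to an absolute path and upload from there:

bazel run //datagen:data_gen -- --num=1 --output_path=/tmp/demo_data.ndjson
gsutil -u ${PROJECT_ID?} cp /tmp/demo_data.ndjson gs://${PROJECT_ID?}/demo_data.ndjson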

startup-script does not get invoked on the remote VM instance

Hi,

When I run the command below, the startup-script does not get invoked on the remote VM instance.

bazel run cmd/apply:apply -- --config_path=config.yaml --projects=ghcdrupalproject

cat config.yaml
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This sample configuration provides the minimum configuration required by the DPT scripts.
# Audit resources will be created locally in the project.
overall:
  organization_id: '752989131665'
  billing_account: 01B8CG-A6J720-GF5JC6
  domain: example.com

generated_fields_path: ./generated_fields.yaml

projects:
- project_id: ghcdrupalproject
  owners_group: [email protected]
  auditors_group: [email protected]
  audit:
    logs_bigquery_dataset:
      dataset_id: digitalapicraft_ghcdrupalproject001_logs  # Bigquery Dataset names must use underscores.
      location: US
  devops:
    state_storage_bucket:
      name: digitalapicraft-ghcdrupalproject-state
      location: US
  compute_firewalls:
  - name: ghcdrupal-firewall
    network: default
    allow:
      protocol: "tcp"
      ports: ["22","80","443"]
  compute_instances:
  - name: ghcdrupalinstance
    zone: us-central1-a
    machine_type: n1-standard-2
    boot_disk:
      initialize_params:
        image: centos-cloud/centos-7-v20200309
    network_interface:
      network: default
      access_config: {}
    metadata:
      startup-script: "yum -y install git.x86_64; cd /opt; touch /root/.gitcookies; chmod 0600 /root/.gitcookies; git config --global http.cookiefile /root/.gitcookies tr , \\t <<\__END__ >>/root/.gitcookies source.developers.google.com,FALSE,/,TRUE,2147483647,o,git-kaushal.example.com=UV0cSkzM9t4BCjjy6VwnqDO3m-F3BbJbDSgkL9qNx8VzUaiLna3yvvvG92miujo __END__; git clone https://source.developers.example.com/p/dacthir/r/testhealth portalCode; cd /opt/portalCode/scripts; sh -xv /opt/portalCode/scripts/installnginxmariadbdrupalghc.sh"
    service_account:
      email: [email protected]
      scopes:
      - cloud-platform

Flutter Access

I've been building a FHIR library for Dart/Flutter (https://github.com/fhir-fli/fhir). All resources are defined, it will securely store data on the device, and it's able to make all of the RESTful calls according to the FHIR spec. But I've run into an issue with authentication and authorization. Should I be able to access a FHIR store through an Android app? Also, have you given any thought to modeling a FHIR database in Cloud Firestore?

Question on imaging/ml/ml_codelab/scripts/inference/inference.py file

What's the purpose of the code below in the healthcare/imaging/ml/ml_codelab/scripts/inference/inference.py file?

if not parsed_message:
  _logger.info('Ignoring new message: %s', image_instance_path)
  message.ack()
  return

Why is the message being acknowledged even though it is being ignored? I would imagine the message should be acknowledged only after it has been processed successfully.
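
For context, a common Pub/Sub consumer pattern (a general sketch, not necessarily the authors' rationale here) is to ack messages that can never be processed, so they are not redelivered forever, and to leave retries only for transient failures:

def callback(message):
  parsed_message = try_parse(message)  # hypothetical parsing helper
  if not parsed_message:
    # Unparseable or irrelevant message: redelivery would never succeed,
    # so ack it to stop Pub/Sub from retrying it indefinitely.
    message.ack()
    return
  try:
    run_inference(parsed_message)  # hypothetical processing step
    message.ack()                  # ack only after successful processing
  except TransientError:           # hypothetical transient failure type
    message.nack()                 # let Pub/Sub redeliver later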

Ingesting HL7v2 data from GCS failing

Command:

curl -X POST \
     -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     --data '{
        "gcsSource": {
          "uri": "gs://$(PROJECT_ID)/hl7v2-sample.json"
        }
      }' \
     "https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID/messages:ingest"

Response:

"error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"gcsSource\": Cannot find field.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "description": "Invalid JSON payload received. Unknown name \"gcsSource\": Cannot find field."
          }
        ]
      }
    ]
  }

I'm not sure why it's not able to locate the field. I have also tried using "gcsSourceLocation" and "gcsUri" because of another issue I found here, but I still received the same error.
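
In the v1 API, messages:ingest takes a single message in the request body; loading from GCS is done with the store-level :import method, where gcsSource is a valid field. A sketch of that call (an assumption that :import is what is wanted here, with the same placeholders as above):

curl -X POST \
     -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     --data '{"gcsSource": {"uri": "gs://PROJECT_ID/hl7v2-sample.json"}}' \
     "https://healthcare.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/hl7V2Stores/HL7V2_STORE_ID:import"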

build step 0 "gcr.io/cloud-builders/docker" failed

I get the following error when building the inference module for the Breast Density Classification Model on AutoML Vision. Any help would be appreciated.

%%bash -s {project_id}
PROJECT_ID=$1

gcloud builds submit --config scripts/inference/cloudbuild.yaml --timeout 1h scripts/inference

----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "c51add17-dad1-46b9-8524-75fe2c094faa"

FETCHSOURCE
Fetching storage object: gs://my-datalab-tutorials_cloudbuild/source/1566312982.06-8ee4df9c86394d5781360b6d176b8c12.tgz#1566312983140069
Copying gs://my-datalab-tutorials_cloudbuild/source/1566312982.06-8ee4df9c86394d5781360b6d176b8c12.tgz#1566312983140069...
/ [1 files][  8.9 KiB/  8.9 KiB]                                                
Operation completed over 1 objects/8.9 KiB.                                      
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon  36.35kB
Step 1/5 : FROM google/cloud-sdk
latest: Pulling from google/cloud-sdk
22dbe790f715: Pulling fs layer
9b50d9fc3c82: Pulling fs layer
61a04f82847e: Pulling fs layer
9b50d9fc3c82: Verifying Checksum
9b50d9fc3c82: Download complete
22dbe790f715: Verifying Checksum
22dbe790f715: Download complete
61a04f82847e: Verifying Checksum
61a04f82847e: Download complete
22dbe790f715: Pull complete
9b50d9fc3c82: Pull complete
61a04f82847e: Pull complete
Digest: sha256:d026fcb44de9f3ac58ed959afa892d8216de858dc69b370e001d641a4e362437
Status: Downloaded newer image for google/cloud-sdk:latest
 ---> fdb2213adc87
Step 2/5 : RUN mkdir -p /opt/inference_module/src && mkdir -p /opt/inference_module/bin
 ---> Running in 0408360c19ae
Removing intermediate container 0408360c19ae
 ---> 43f5c80f3aef
Step 3/5 : ADD / /opt/inference_module/src/
 ---> 3ae13bc0ba75
Step 4/5 : RUN pip install --upgrade pip && pip install --upgrade virtualenv &&     virtualenv /opt/inference_module/venv &&     . /opt/inference_module/venv/bin/activate &&     cd /opt/inference_module/src/ &&     python setup.py install
 ---> Running in c647ac561529
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Requirement already up-to-date: pip in /usr/local/lib/python2.7/dist-packages/pip-19.2.2-py2.7.egg (19.2.2)
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Collecting virtualenv
  Downloading https://files.pythonhosted.org/packages/5e/6a/fa7e7f533595402040c831500bb10576e1f4b8f54d476f3994c7c55d8f5e/virtualenv-16.7.3-py2.py3-none-any.whl (3.3MB)
Installing collected packages: virtualenv
Successfully installed virtualenv-16.7.3
New python executable in /opt/inference_module/venv/bin/python
Installing setuptools, pip, wheel...
done.
running install
running bdist_egg
running egg_info
creating breast_density_inference_module.egg-info
writing requirements to breast_density_inference_module.egg-info/requires.txt
writing breast_density_inference_module.egg-info/PKG-INFO
writing top-level names to breast_density_inference_module.egg-info/top_level.txt
writing dependency_links to breast_density_inference_module.egg-info/dependency_links.txt
writing manifest file 'breast_density_inference_module.egg-info/SOURCES.txt'
reading manifest file 'breast_density_inference_module.egg-info/SOURCES.txt'
writing manifest file 'breast_density_inference_module.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
warning: install_lib: 'build/lib.linux-x86_64-2.7' does not exist -- no Python modules to install

creating build
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying breast_density_inference_module.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying breast_density_inference_module.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying breast_density_inference_module.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying breast_density_inference_module.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying breast_density_inference_module.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist/breast_density_inference_module-0.1-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing breast_density_inference_module-0.1-py2.7.egg
Copying breast_density_inference_module-0.1-py2.7.egg to /opt/inference_module/venv/lib/python2.7/site-packages
Adding breast-density-inference-module 0.1 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/breast_density_inference_module-0.1-py2.7.egg
Processing dependencies for breast-density-inference-module==0.1
Searching for attrs
Reading https://pypi.org/simple/attrs/
Downloading https://files.pythonhosted.org/packages/23/96/d828354fa2dbdf216eaa7b7de0db692f12c234f7ef888cc14980ef40d1d2/attrs-19.1.0-py2.py3-none-any.whl#sha256=69c0dbf2ed392de1cb5ec704444b08a5ef81680a61cb899dc08127123af36a79
Best match: attrs 19.1.0
Processing attrs-19.1.0-py2.py3-none-any.whl
Installing attrs-19.1.0-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/attrs-19.1.0-py2.7.egg/EGG-INFO/requires.txt
Adding attrs 19.1.0 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/attrs-19.1.0-py2.7.egg
Searching for google-cloud-automl
Reading https://pypi.org/simple/google-cloud-automl/
Downloading https://files.pythonhosted.org/packages/1a/d2/54d91e8bda501ea80126d1c46f255008bba4e03ce3fa4f3738511f96def1/google_cloud_automl-0.4.0-py2.py3-none-any.whl#sha256=c4925ffc538a2123ff00ecff81583218af91b0574bca2a80c39ea09b7a2e4af2
Best match: google-cloud-automl 0.4.0
Processing google_cloud_automl-0.4.0-py2.py3-none-any.whl
Installing google_cloud_automl-0.4.0-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/google_cloud_automl-0.4.0-py2.7.egg/EGG-INFO/requires.txt
Adding google-cloud-automl 0.4.0 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/google_cloud_automl-0.4.0-py2.7.egg
Searching for oauth2client
Reading https://pypi.org/simple/oauth2client/
Downloading https://files.pythonhosted.org/packages/95/a9/4f25a14d23f0786b64875b91784607c2277eff25d48f915e39ff0cff505a/oauth2client-4.1.3-py2.py3-none-any.whl#sha256=b8a81cc5d60e2d364f0b1b98f958dbd472887acaf1a5b05e21c28c31a2d6d3ac
Best match: oauth2client 4.1.3
Processing oauth2client-4.1.3-py2.py3-none-any.whl
Installing oauth2client-4.1.3-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/oauth2client-4.1.3-py2.7.egg/EGG-INFO/requires.txt
Adding oauth2client 4.1.3 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/oauth2client-4.1.3-py2.7.egg
Searching for httplib2
Reading https://pypi.org/simple/httplib2/
Downloading https://files.pythonhosted.org/packages/78/23/bb9606e87a66fd8c72a2b1a75b049d3859a122bc2648915be845bc44e04f/httplib2-0.13.1.tar.gz#sha256=6901c8c0ffcf721f9ce270ad86da37bc2b4d32b8802d4a9cec38274898a64044
Best match: httplib2 0.13.1
Processing httplib2-0.13.1.tar.gz
Writing /tmp/easy_install-ucQzLn/httplib2-0.13.1/setup.cfg
Running httplib2-0.13.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-ucQzLn/httplib2-0.13.1/egg-dist-tmp-Rnehvt
zip_safe flag not set; analyzing archive contents...
httplib2.certs: module references __file__
creating /opt/inference_module/venv/lib/python2.7/site-packages/httplib2-0.13.1-py2.7.egg
Extracting httplib2-0.13.1-py2.7.egg to /opt/inference_module/venv/lib/python2.7/site-packages
Adding httplib2 0.13.1 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/httplib2-0.13.1-py2.7.egg
Searching for google-cloud-pubsub
Reading https://pypi.org/simple/google-cloud-pubsub/
Downloading https://files.pythonhosted.org/packages/ff/44/e9ed21d9e56a4c0c7e6f8657368bc6af9a8e34f613ecfdd616b7291d95b3/google_cloud_pubsub-0.45.0-py2.py3-none-any.whl#sha256=260ec6ab8bf8fe9d21cb11b445160eeead0d6578978afbe48769c7088c702be9
Best match: google-cloud-pubsub 0.45.0
Processing google_cloud_pubsub-0.45.0-py2.py3-none-any.whl
Installing google_cloud_pubsub-0.45.0-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/google_cloud_pubsub-0.45.0-py2.7.egg/EGG-INFO/requires.txt
Adding google-cloud-pubsub 0.45.0 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/google_cloud_pubsub-0.45.0-py2.7.egg
Searching for googleapis-common-protos==1.5.3
Reading https://pypi.org/simple/googleapis-common-protos/
Downloading https://files.pythonhosted.org/packages/00/03/d25bed04ec8d930bcfa488ba81a2ecbf7eb36ae3ffd7e8f5be0d036a89c9/googleapis-common-protos-1.5.3.tar.gz#sha256=c075eddaa2628ab519e01b7d75b76e66c40eaa50fc52758d8225f84708950ef2
Best match: googleapis-common-protos 1.5.3
Processing googleapis-common-protos-1.5.3.tar.gz
Writing /tmp/easy_install-blImn8/googleapis-common-protos-1.5.3/setup.cfg
Running googleapis-common-protos-1.5.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-blImn8/googleapis-common-protos-1.5.3/egg-dist-tmp-a05X9R
zip_safe flag not set; analyzing archive contents...
Moving googleapis_common_protos-1.5.3-py2.7.egg to /opt/inference_module/venv/lib/python2.7/site-packages
Adding googleapis-common-protos 1.5.3 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/googleapis_common_protos-1.5.3-py2.7.egg
Searching for google-api-core
Reading https://pypi.org/simple/google-api-core/
Downloading https://files.pythonhosted.org/packages/71/e5/7059475b3013a3c75abe35015c5761735ab224eb1b129fee7c8e376e7805/google_api_core-1.14.2-py2.py3-none-any.whl#sha256=b2b91107bcc3b981633c89602b46451f6474973089febab3ee51c49cb7ae6a1f
Best match: google-api-core 1.14.2
Processing google_api_core-1.14.2-py2.py3-none-any.whl
Installing google_api_core-1.14.2-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/google_api_core-1.14.2-py2.7.egg/EGG-INFO/requires.txt
Adding google-api-core 1.14.2 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/google_api_core-1.14.2-py2.7.egg
Searching for google-api-python-client
Reading https://pypi.org/simple/google-api-python-client/
Downloading https://files.pythonhosted.org/packages/31/c7/16ca16d28f2d71c8bd6fa67c91eb2a82259dc589c0504f903b675ecdaa84/google_api_python_client-1.7.11-py2-none-any.whl#sha256=3121d55d106ef1a2756e8074239512055bd99eb44da417b3dd680f9a1385adec
Best match: google-api-python-client 1.7.11
Processing google_api_python_client-1.7.11-py2-none-any.whl
Installing google_api_python_client-1.7.11-py2-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/google_api_python_client-1.7.11-py2.7.egg/EGG-INFO/requires.txt
Adding google-api-python-client 1.7.11 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/google_api_python_client-1.7.11-py2.7.egg
Searching for requests-toolbelt
Reading https://pypi.org/simple/requests-toolbelt/
Downloading https://files.pythonhosted.org/packages/60/ef/7681134338fc097acef8d9b2f8abe0458e4d87559c689a8c306d0957ece5/requests_toolbelt-0.9.1-py2.py3-none-any.whl#sha256=380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f
Best match: requests-toolbelt 0.9.1
Processing requests_toolbelt-0.9.1-py2.py3-none-any.whl
Installing requests_toolbelt-0.9.1-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/requests_toolbelt-0.9.1-py2.7.egg/EGG-INFO/requires.txt
Adding requests-toolbelt 0.9.1 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/requests_toolbelt-0.9.1-py2.7.egg
Searching for enum34
Reading https://pypi.org/simple/enum34/
Downloading https://files.pythonhosted.org/packages/c5/db/e56e6b4bbac7c4a06de1c50de6fe1ef3810018ae11732a50f15f62c7d050/enum34-1.1.6-py2-none-any.whl#sha256=6bd0f6ad48ec2aa117d3d141940d484deccda84d4fcd884f5c3d93c23ecd8c79
Best match: enum34 1.1.6
Processing enum34-1.1.6-py2-none-any.whl
Installing enum34-1.1.6-py2-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
Adding enum34 1.1.6 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/enum34-1.1.6-py2.7.egg
Searching for six>=1.6.1
Reading https://pypi.org/simple/six/
Downloading https://files.pythonhosted.org/packages/73/fb/00a976f728d0d1fecfe898238ce23f502a721c0ac0ecfedb80e0d88c64e9/six-1.12.0-py2.py3-none-any.whl#sha256=3350809f0555b11f552448330d0b52d5f24c91a322ea4a15ef22629740f3761c
Best match: six 1.12.0
Processing six-1.12.0-py2.py3-none-any.whl
Installing six-1.12.0-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
Adding six 1.12.0 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/six-1.12.0-py2.7.egg
Searching for rsa>=3.1.4
Reading https://pypi.org/simple/rsa/
Downloading https://files.pythonhosted.org/packages/02/e5/38518af393f7c214357079ce67a317307936896e961e35450b70fad2a9cf/rsa-4.0-py2.py3-none-any.whl#sha256=14ba45700ff1ec9eeb206a2ce76b32814958a98e372006c8fb76ba820211be66
Best match: rsa 4.0
Processing rsa-4.0-py2.py3-none-any.whl
Installing rsa-4.0-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/rsa-4.0-py2.7.egg/EGG-INFO/requires.txt
Adding rsa 4.0 to easy-install.pth file
Installing pyrsa-encrypt script to /opt/inference_module/venv/bin
Installing pyrsa-verify script to /opt/inference_module/venv/bin
Installing pyrsa-sign script to /opt/inference_module/venv/bin
Installing pyrsa-priv2pub script to /opt/inference_module/venv/bin
Installing pyrsa-decrypt script to /opt/inference_module/venv/bin
Installing pyrsa-keygen script to /opt/inference_module/venv/bin

Installed /opt/inference_module/venv/lib/python2.7/site-packages/rsa-4.0-py2.7.egg
Searching for pyasn1>=0.1.7
Reading https://pypi.org/simple/pyasn1/
Downloading https://files.pythonhosted.org/packages/6a/6e/209351ec34b7d7807342e2bb6ff8a96eef1fd5dcac13bdbadf065c2bb55c/pyasn1-0.4.6-py2.py3-none-any.whl#sha256=3bb81821d47b17146049e7574ab4bf1e315eb7aead30efe5d6a9ca422c9710be
Best match: pyasn1 0.4.6
Processing pyasn1-0.4.6-py2.py3-none-any.whl
Installing pyasn1-0.4.6-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
Adding pyasn1 0.4.6 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/pyasn1-0.4.6-py2.7.egg
Searching for pyasn1-modules>=0.0.5
Reading https://pypi.org/simple/pyasn1-modules/
Downloading https://files.pythonhosted.org/packages/be/70/e5ea8afd6d08a4b99ebfc77bd1845248d56cfcf43d11f9dc324b9580a35c/pyasn1_modules-0.2.6-py2.py3-none-any.whl#sha256=e30199a9d221f1b26c885ff3d87fd08694dbbe18ed0e8e405a2a7126d30ce4c0
Best match: pyasn1-modules 0.2.6
Processing pyasn1_modules-0.2.6-py2.py3-none-any.whl
Installing pyasn1_modules-0.2.6-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/pyasn1_modules-0.2.6-py2.7.egg/EGG-INFO/requires.txt
Adding pyasn1-modules 0.2.6 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/pyasn1_modules-0.2.6-py2.7.egg
Searching for grpc-google-iam-v1<0.13dev,>=0.12.3
Reading https://pypi.org/simple/grpc-google-iam-v1/
Downloading https://files.pythonhosted.org/packages/65/19/2060c8faa325fddc09aa67af98ffcb6813f39a0ad805679fa64815362b3a/grpc-google-iam-v1-0.12.3.tar.gz#sha256=0bfb5b56f648f457021a91c0df0db4934b6e0c300bd0f2de2333383fe958aa72
Best match: grpc-google-iam-v1 0.12.3
Processing grpc-google-iam-v1-0.12.3.tar.gz
Writing /tmp/easy_install-Lc6wP0/grpc-google-iam-v1-0.12.3/setup.cfg
Running grpc-google-iam-v1-0.12.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Lc6wP0/grpc-google-iam-v1-0.12.3/egg-dist-tmp-YmxQ4J
zip_safe flag not set; analyzing archive contents...
Moving grpc_google_iam_v1-0.12.3-py2.7.egg to /opt/inference_module/venv/lib/python2.7/site-packages
Adding grpc-google-iam-v1 0.12.3 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/grpc_google_iam_v1-0.12.3-py2.7.egg
Searching for protobuf>=3.0.0
Reading https://pypi.org/simple/protobuf/
Downloading https://files.pythonhosted.org/packages/c7/60/19c2c3b563c8a5ebbc5f17982fd794f415cfc4633a8248ab3e23a47662bc/protobuf-3.9.1-cp27-cp27mu-manylinux1_x86_64.whl#sha256=55f85b7808766e5e3f526818f5e2aeb5ba2edcc45bcccede46a3ccc19b569cb0
Best match: protobuf 3.9.1
Processing protobuf-3.9.1-cp27-cp27mu-manylinux1_x86_64.whl
Installing protobuf-3.9.1-cp27-cp27mu-manylinux1_x86_64.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/protobuf-3.9.1-py2.7-linux-x86_64.egg/EGG-INFO/requires.txt
Adding protobuf 3.9.1 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/protobuf-3.9.1-py2.7-linux-x86_64.egg
Searching for requests<3.0.0dev,>=2.18.0
Reading https://pypi.org/simple/requests/
Downloading https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl#sha256=9cf5292fcd0f598c671cfc1e0d7d1a7f13bb8085e9a590f48c010551dc6c4b31
Best match: requests 2.22.0
Processing requests-2.22.0-py2.py3-none-any.whl
Installing requests-2.22.0-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
writing requirements to /opt/inference_module/venv/lib/python2.7/site-packages/requests-2.22.0-py2.7.egg/EGG-INFO/requires.txt
Adding requests 2.22.0 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/requests-2.22.0-py2.7.egg
Searching for pytz
Reading https://pypi.org/simple/pytz/
Downloading https://files.pythonhosted.org/packages/87/76/46d697698a143e05f77bec5a526bf4e56a0be61d63425b68f4ba553b51f2/pytz-2019.2-py2.py3-none-any.whl#sha256=c894d57500a4cd2d5c71114aaab77dbab5eabd9022308ce5ac9bb93a60a6f0c7
Best match: pytz 2019.2
Processing pytz-2019.2-py2.py3-none-any.whl
Installing pytz-2019.2-py2.py3-none-any.whl to /opt/inference_module/venv/lib/python2.7/site-packages
Adding pytz 2019.2 to easy-install.pth file

Installed /opt/inference_module/venv/lib/python2.7/site-packages/pytz-2019.2-py2.7.egg
error: googleapis-common-protos 1.5.3 is installed but googleapis-common-protos<2.0dev,>=1.6.0 is required by set(['google-api-core'])
The command '/bin/sh -c pip install --upgrade pip && pip install --upgrade virtualenv &&     virtualenv /opt/inference_module/venv &&     . /opt/inference_module/venv/bin/activate &&     cd /opt/inference_module/src/ &&     python setup.py install' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1

--------------------------------------------------------------------------------

Creating temporary tarball archive of 7 file(s) totalling 29.1 KiB before compression.
Uploading tarball of [scripts/inference] to [gs://my-datalab-tutorials_cloudbuild/source/1566312982.06-8ee4df9c86394d5781360b6d176b8c12.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/my-datalab-tutorials/builds/c51add17-dad1-46b9-8524-75fe2c094faa].
Logs are available at [https://console.cloud.google.com/gcr/builds/c51add17-dad1-46b9-8524-75fe2c094faa?project=690702105461].
ERROR: (gcloud.builds.submit) build c51add17-dad1-46b9-8524-75fe2c094faa completed with status "FAILURE"
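
The actual failure is the dependency conflict a few lines above the Docker error (googleapis-common-protos 1.5.3 is installed, but google-api-core requires >=1.6.0). One possible workaround (an assumption about scripts/inference, not a confirmed fix) is to pre-install a compatible version with pip before running setup.py, e.g. in the Dockerfile:

RUN pip install --upgrade pip && pip install --upgrade virtualenv && \
    virtualenv /opt/inference_module/venv && \
    . /opt/inference_module/venv/bin/activate && \
    pip install 'googleapis-common-protos>=1.6.0' && \
    cd /opt/inference_module/src/ && \
    python setup.py install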

IAM group can't be assigned owner role

When running the simple configuration, an attempt is made to set the owners group as project owner, which isn't possible.

  # google_project_iam_member.project["roles/owner group:[email protected]"] will be created
  + resource "google_project_iam_member" "project" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = "group:[email protected]"
      + project = "my-test-project-000"
      + role    = "roles/owner"
    }

Error running install on frontend: Argument of type is not assignable to parameter

I get the following error when running yarn install in the frontend directory:

> [email protected] compile /Users/gri306/Code/healthcare/fhir/immunizations_demo/frontend
> tsc -p .

src/test/resource-service-spy.ts:22:72 - error TS2345: Argument of type '{ createResource: undefined; deleteResource: Promise<void>; executeBatch: Promise<void>; getResource: Promise<void>; saveResource: Promise<void>; searchResource: Promise<Bundle>; }' is not assignable to parameter of type 'ReadonlyArray<"requests$" | "searchResource" | "getResource" | "createResource" | "saveResource" | "deleteResource" | "executeBatch"> | { requests$?: any; searchResource?: Promise<...> | undefined; ... 4 more ...; executeBatch?: Promise<...> | undefined; }'.
  Type '{ createResource: undefined; deleteResource: Promise<void>; executeBatch: Promise<void>; getResource: Promise<void>; saveResource: Promise<void>; searchResource: Promise<Bundle>; }' is not assignable to type '{ requests$?: any; searchResource?: Promise<Bundle> | undefined; getResource?: Promise<Immunization | DomainResource | Bundle | Account | ActivityDefinition | AdverseEvent | ... 111 more ... | Parameters> | undefined; createResource?: Promise<...> | undefined; saveResource?: Promise<...> | undefined; deleteResource?...'.
    Types of property 'getResource' are incompatible.
      Type 'Promise<void>' is not assignable to type 'Promise<Immunization | DomainResource | Bundle | Account | ActivityDefinition | AdverseEvent | AllergyIntolerance | Appointment | AppointmentResponse | AuditEvent | Basic | ... 106 more ... | Parameters>'.
        Type 'void' is not assignable to type 'Immunization | DomainResource | Bundle | Account | ActivityDefinition | AdverseEvent | AllergyIntolerance | Appointment | AppointmentResponse | AuditEvent | Basic | ... 106 more ... | Parameters'.

 22   const spy = jasmine.createSpyObj<ResourceService>('ResourceService', {
                                                                           ~
 23     createResource: undefined,
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
 28     searchResource: new Promise(() => {}),
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 29   });
    ~~~

Security Policy violation Binary Artifacts

This issue was automatically created by Allstar.

Security Policy Violation
Project is out of compliance with Binary Artifacts policy: binaries present in source code

Rule Description
Binary Artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the Security Scorecards Documentation for Binary Artifacts.

Remediation Steps
To remediate, remove the generated executable artifacts from the repository.

Artifacts Found

  • ehr/hl7/message_converter/java/libs/libdatatypes-speed.jar
  • ehr/hl7/message_converter/java/libs/libextensions-speed.jar
  • ehr/hl7/message_converter/java/libs/libresources_proto-speed.jar
  • ehr/hl7/message_converter/java/libs/libstu3.jar

Additional Information
This policy is drawn from Security Scorecards, which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.


Allstar has been installed on all Google managed GitHub orgs. Policies are gradually being rolled out and enforced by the GOSST and OSPO teams. Learn more at http://go/allstar

This issue will auto resolve when the policy is in compliance.

Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

var.composite_root_resources is null

Hi,

I have encountered this issue when deploying a Forseti project.

Error: Invalid function argument

  on external/terraform_google_forseti/modules/server_config/main.tf line 31, in locals:
  31:   root_resource_id         = "root_resource_id: ${length(var.composite_root_resources) > 0 ? "\"\"" : var.folder_id != "" ? "folders/${var.folder_id}" : "organizations/${var.org_id}"}"
    |----------------
    | var.composite_root_resources is null

Invalid value for "value" parameter: argument must not be null.


Error: Invalid function argument

  on external/terraform_google_forseti/modules/server_config/main.tf line 32, in locals:
  32:   composite_root_resources = length(var.composite_root_resources) > 0 ? "composite_root_resources: [${join(", ", formatlist("\"%s\"", var.composite_root_resources))}]" : ""
    |----------------
    | var.composite_root_resources is null

Invalid value for "value" parameter: argument must not be null.

But I also noticed that in the config schema you mentioned composite_root_resources should be unset.
Can I please have some help on what I should specify there? Thank you!

What is this repo vs. the https://github.com/google/fhir repo? and how to generate clients?

I can use artman to generate clients in the googleapis repo, but I can't seem to do that in the google/fhir repo. What is this repo? How is it all related?

I am trying to find Maven Central artifacts for the Google Healthcare API, or generate my own Java objects, but I can't seem to find anything.

Thanks for any pointers! I have spent a huge amount of time on this today.

Getting "failed to apply forseti instance" while trying to deploy project_with_remote_audit_logs.yaml

Hello,

I am trying to deploy sample/deployment_manager/project_with_remote_audit_logs.yaml.
I have configured audit, forseti and app projects in the yaml. I have cloned the master branch.

When I run:
bazel run cmd/apply:apply -- --config_path=project_with_remote_audit_logs.yaml --dry_run --terraform_configs_dir=/tmp/dpt_output

I get the following error:

2020/01/15 21:07:15 Dry run call: terraform init
2020/01/15 21:07:15 Dry run call: terraform apply
2020/01/15 21:07:15 Dry run call: gcloud logging sinks describe audit-logs-to-bigquery --format json --project xyzsdkjsldkj-dev
2020/01/15 21:07:15 Applying Forseti instance in "xyzsdkjsldkj-dev"
2020/01/15 21:07:15 Running: [cp -r -L --no-preserve=mode,ownership ./external/terraform_google_forseti /tmp/dpt_output/xyzsdkjsldkj-dev/forseti/external]
2020/01/15 21:07:15 Failed to apply configs: failed to apply base projects: failed to apply forseti instance in "xyzsdkjsldkj-dev": failed to copy "./external/terraform_google_forseti" to "/tmp/dpt_output/xyzsdkjsldkj-dev/forseti/external": exit status 64: cp: illegal option -- -
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvXc] source_file target_file
       cp [-R [-H | -L | -P]] [-fi | -n] [-apvXc] source_file ... target_directory

I have masked the actual project_id above.
If I remove the forseti section from the config yaml, then I am able to do a dry run.

Could you please help me with what I could be doing wrong here?

Regards,
Giriraj

Dataflow Preprocess step from breast_density_cloud_ml.ipynb notebook failing

DEPRECATION: pip install --download has been deprecated and will be removed in the future. Pip now has a download command that should be used instead.
You are using pip version 9.0.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Traceback (most recent call last):
  File "/usr/local/envs/py2env/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/local/envs/py2env/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/preprocess/preprocess.py", line 362, in <module>
    main(sys.argv[1:])
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/preprocess/preprocess.py", line 358, in main
    run(arg_dict)
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/preprocess/preprocess.py", line 275, in run
    configure_pipeline(p, in_args)
  File "/usr/local/envs/py2env/lib/python2.7/site-packages/apache_beam/pipeline.py", line 183, in __exit__
    self.run().wait_until_finish()
  File "/usr/local/envs/py2env/lib/python2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 778, in wait_until_finish
    (self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 581, in do_work
    work_executor.execute()
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 166, in execute
    op.start()
  File "dataflow_worker/operations.py", line 283, in dataflow_worker.operations.DoOperation.start (dataflow_worker/operations.c:10680)
    def start(self):
  File "dataflow_worker/operations.py", line 284, in dataflow_worker.operations.DoOperation.start (dataflow_worker/operations.c:10574)
    with self.scoped_start_state:
  File "dataflow_worker/operations.py", line 321, in dataflow_worker.operations.DoOperation.start (dataflow_worker/operations.c:10521)
    self.dofn_runner.start()
  File "apache_beam/runners/common.py", line 408, in apache_beam.runners.common.DoFnRunner.start (apache_beam/runners/common.c:11132)
    self._invoke_bundle_method(self.do_fn_invoker.invoke_start_bundle)
  File "apache_beam/runners/common.py", line 402, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method (apache_beam/runners/common.c:10878)
    self._reraise_augmented(exn)
  File "apache_beam/runners/common.py", line 431, in apache_beam.runners.common.DoFnRunner._reraise_augmented (apache_beam/runners/common.c:11673)
    raise new_exn, None, original_traceback
  File "apache_beam/runners/common.py", line 400, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method (apache_beam/runners/common.c:10788)
    bundle_method()
  File "apache_beam/runners/common.py", line 168, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle (apache_beam/runners/common.c:5594)
    def invoke_start_bundle(self):
  File "apache_beam/runners/common.py", line 172, in apache_beam.runners.common.DoFnInvoker.invoke_start_bundle (apache_beam/runners/common.c:5489)
    self.signature.start_bundle_method.method_value())
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/preprocess/preprocess.py", line 162, in start_bundle
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/preprocess/preprocess.py", line 74, in __init__
  File "/content/datalab/healthcare/imaging/ml_codelab/scripts/preprocess/preprocess.py", line 94, in _build_graph
  File "/usr/local/lib/python2.7/dist-packages/scripts/ml_utils.py", line 21, in <module>
    import tensorflow_hub
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_hub/__init__.py", line 29, in <module>
    from tensorflow_hub.estimator import LatestModuleExporter
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_hub/estimator.py", line 25, in <module>
    from tensorflow_hub import tf_utils
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_hub/tf_utils.py", line 28, in <module>
    from tensorflow_hub import tf_v1
ImportError: cannot import name tf_v1 [while running 'Preprocess Image']

Healthcare API doesn't have required Cloud Storage permission Error

In healthcare/imaging/ml/ml_codelab/breast_density_auto_ml.ipynb, the step below is failing.

%%bash -s {jpeg_folder} {project_id} {location} {dataset_id} {dicom_store_id}
gcloud beta healthcare --project $2 dicom-stores export gcs $5 --location=$3 --dataset=$4 --mime-type="image/jpeg; transfer-syntax=1.2.840.10008.1.2.4.50" --gcs-uri-prefix=$1

Error msg:
Healthcare API doesn't have required Cloud Storage permission. See https://cloud.google.com/healthcare/docs/how-tos/permissions-healthcare-api-gcp-products for more information
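
The linked page describes granting the Cloud Healthcare service agent access to the destination bucket; a sketch of that grant (an assumption about the agent's name format and the role, so double-check against the linked documentation):

# PROJECT_NUMBER is the numeric project number, BUCKET the export destination
gsutil iam ch \
  serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-healthcare.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://${BUCKET}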

breast_density_auto_ml.ipynb failing while building Inference_Module

The step below from the notebook is failing:
gcloud builds submit --config scripts/inference/cloudbuild.yaml --timeout 1h scripts/inference

with the error message:

Step 4/5 : RUN pip install --upgrade pip && pip install --upgrade virtualenv && virtualenv /opt/inference_module/venv && . /opt/inference_module/venv/bin/activate && cd /opt/inference_module/src/ && python setup.py install
---> Running in 43cb51c7bca3
Collecting pip
Downloading https://files.pythonhosted.org/packages/00/b6/9cfa56b4081ad13874b0c6f96af8ce16cfbc1cb06bedf8e9164ce5551ec1/pip-19.3.1-py2.py3-none-any.whl (1.4MB)
Installing collected packages: pip
Found existing installation: pip 18.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Can't uninstall 'pip'. No files were found to uninstall.
Successfully installed pip-19.3.1
Traceback (most recent call last):
File "/usr/bin/pip", line 11, in
sys.exit(main())
TypeError: 'module' object is not callable
The command '/bin/sh -c pip install --upgrade pip && pip install --upgrade virtualenv && virtualenv /opt/inference_module/venv && . /opt/inference_module/venv/bin/activate && cd /opt/inference_module/src/ && python setup.py install' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
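
The traceback comes from the old /usr/bin/pip wrapper script being reused after pip has upgraded itself to 19.3, whose entry point moved. One possible workaround (an assumption about the build's Dockerfile, not a confirmed fix) is to invoke pip through the interpreter so the stale wrapper is bypassed:

RUN python -m pip install --upgrade pip && \
    python -m pip install --upgrade virtualenv && \
    virtualenv /opt/inference_module/venv && \
    . /opt/inference_module/venv/bin/activate && \
    cd /opt/inference_module/src/ && \
    python setup.py install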

"Kubernetes Cluster and a Deployment for the inference module" step is failing in breast_density_auto_ml.ipynb

It's failing with the error below.
I believe the failure is because the location in the notebook is defined as "us-central1".
But for Kubernetes cluster creation you have to select a zone such as "us-central1-a", which is not acceptable for the Healthcare APIs, since a healthcare dataset can currently be created only in a region, not in a zone.

WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the --no-enable-autoupgrade flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the --[no-]enable-basic-auth flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the --[no-]issue-client-certificate flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using --no-enable-ip-alias flag. Use --[no-]enable-ip-alias flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run clusters create with the flag --metadata disable-legacy-endpoints=true.
WARNING: Your Pod address range (--cluster-ipv4-cidr) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Permission denied on 'locations/{location}' (or it may not exist).
Unable to connect to the server: dial tcp 104.154.119.220:443: i/o timeout

Facing error "socket hang up" while calling dicomWebStoreInstance through Firebase cloud functions

Hello,

I am trying to call the DICOMweb Healthcare API's dicomWebStoreInstance from Firebase Cloud Functions, and it throws the following error:

request to https://healthcare.googleapis.com/v1beta1/projects/........../dicomWeb/studies failed, reason: socket hang up

I increased the timeout period up to 5 minutes, but still got the same error.

I have also tried calling the same API with a client-server approach on my local system, and it works successfully with a proper response.

Can anyone suggest why this API is not working from Firebase Cloud Functions?
Any help regarding this would be greatly appreciated!

Thanks
