
mlab-vis-api's Introduction

MLAB VIS API

What

Python Flask server connected to Bigtable that serves the data needed for the M-Lab visualizations. This is a Python 2.x application. You can run it locally with Docker.

Install

Clone

There are git hooks and Travis files set up in this repository as submodules. You can either clone this repo with the --recursive flag to fetch them, like so: git clone --recursive <...>, or run git submodule update --init after a basic clone.

Prepare Bigtable configuration files

The Bigtable configuration files used here come from the mlab-vis-pipeline repository. They are used to create the Bigtable tables AND to determine the correct query format within this application, so we copy them from that repo into this one. The make script assumes you have mlab-vis-pipeline checked out in the same parent folder.

Run make prepare to copy over necessary files.

Bring over credential file

In order to access the Bigtable tables in the desired environment (staging, production, or sandbox), you need a service account to authenticate with. You should receive a credential JSON file from an M-Lab team member or set up your own. Create a folder outside of the root of this repository where you will store these files.

Build docker image

docker build -t data-api .

Run Docker Image

You can run a local server like so:

docker run -p 8080:8080 \
-e KEY_FILE=/keys/<keyname>.json \
-v <local folder containing your secret keys>:/keys \
-e API_MODE=production|staging|sandbox data-api

Note that you need to create a mapping to the folder containing your keys. For example, if my local keys live in ~/dev/mlab-keys and the file name for the production environment is production-key.json, this command would look like:

docker run -p 8080:8080 \
-e KEY_FILE=/keys/production-key.json \
-v /Users/iros/dev/mlab-keys:/keys \
-e API_MODE=production data-api

The environment you choose to run in needs to have the appropriate service key available as a json file.

Once the docker image is running, you should be able to access it at http://localhost:8080.
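
For a quick smoke test, a minimal sketch (assuming the requests package is installed locally; the endpoint and query are ones that appear in the issue reports below):

import requests

# Hit the locations search endpoint on the local container and report
# the status code and number of results returned.
resp = requests.get('http://localhost:8080/locations/search',
                    params={'q': 'newyork'})
print(resp.status_code)
print(len(resp.json().get('results', [])))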

The API_MODE flag also selects which of the environments/* files is used. Check the variables in those files to ensure they match your expected settings; they should be generic enough to cover all environments.

Deploy

We deploy using the App Engine flexible environment. Currently, gcloud app deploy is used to deploy internally. Ensure you have this tool installed and configured properly.

First, switch your selected gcloud project to the one that matches your credentials (for example, mlab-oti or mlab-staging). This ensures you can deploy to the appropriate project.

To deploy to app engine, run this simple command: KEY_FILE=<absolute path to your cred file> ./deploy.sh production|staging|sandbox

The app will be deployed and accessible from the service URL which depends on the environment. In production this URL is:

https://data-api.measurementlab.net/

The API is documented at this url as well.

Testing

You can build a test docker container by calling:

docker build -f TestDockerFile -t data-api-test .

Note that you need to have built your data-api container at least once for this to work, since the test image uses it as a base.

To run the container you can call:

docker run \
-e KEY_FILE=/keys/<keyname>.json \
-v <local folder containing your secret keys>:/keys \
-e API_MODE=production|staging|sandbox data-api-test

Note this is a similar call, except we don't pass in a port since we aren't running a web server for testing purposes. You still need to pass in a key for the environment you're testing against, but the default tests were written against production data (we should really refactor them).

Note: you should also lint your code before opening a pull request. You can do so by calling ./lint.sh.

Code

This code depends heavily on the Flask-RESTPlus package.

It uses the Google Cloud Python client library for communicating with Bigtable.

Troubleshooting

If you are getting errors about being unable to authenticate with Google Cloud services, such as:

ERROR: (gcloud.auth.activate-service-account) Invalid value for [ACCOUNT]: The
given account name does not match the account name in the key file.  This
argument can be omitted when using .json keys.

Try authenticating the required service account. You can do this step after calling make prepare.

gcloud auth activate-service-account <service account email address> --key-file <cred file>.json

mlab-vis-api's People

Contributors

dependabot[bot], iros, pbeshai, vlandham


mlab-vis-api's Issues

Metrics are being rounded to nearest integer, but should be floats.

Tested on server metrics

Example

Level3
http://mlab-api-dot-mlab-oti.appspot.com/servers/AS3356/metrics?timebin=day_hour

{
  "meta": {
    "server_asn_name": "Level 3 Communications, Inc.",
    "id": "AS3356",
    "server_asn_number": "AS3356"
  },
  "results": [
    {
      "count": 282,
      "upload_speed_mbps_median": 1,
      "hour": "00",
      "rtt_avg": 78.562,
      "retransmit_avg": 0,
      "download_speed_mbps_median": 6,
      "date": "2015-10-01"
    },
    {
      "count": 189,
      "upload_speed_mbps_median": 1,
      "hour": "01",
      "rtt_avg": 84.615,
      "retransmit_avg": 0,
      "download_speed_mbps_median": 7,
      "date": "2015-10-01"
    },
    {
      "count": 112,
      "upload_speed_mbps_median": 1,
      "hour": "02",
      "rtt_avg": 45.449,
      "retransmit_avg": 0,
      "download_speed_mbps_median": 7,
      "date": "2015-10-01"
    },
    {
      "count": 107,
      "upload_speed_mbps_median": 1,
      "hour": "03",
      "rtt_avg": 103.015,
      "retransmit_avg": 0,
      "download_speed_mbps_median": 14,
      "date": "2015-10-01"
    },
...

Client ASN search causes internal server error

This happens when searching for "com"
http://mlab-api-dot-mlab-oti.appspot.com/client_asns/search/com

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

This works though:
http://mlab-api-dot-mlab-oti.appspot.com/client_asns/search/comc

{"results": [{"meta": {"client_asn_name_lookup": "comclarknetworktechnologycorp", "client_asn_number": "AS17639", "client_asn_name": "ComClark Network & Technology Corp."}, "data": {"last_three_month_test_count": 413, "test_count": 22741}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS7922", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": 507837, "test_count": 7277}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33662", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": 145, "test_count": 5321}}, {"meta": {"client_asn_name_lookup": "comcomsysas", "client_asn_number": "AS42526", "client_asn_name": "COMCOMSYS-AS"}, "data": {"last_three_month_test_count": 21, "test_count": 1975}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsholdingsinc", "client_asn_number": "AS13367", "client_asn_name": "Comcast Cable Communications Holdings, Inc"}, "data": {"last_three_month_test_count": 78, "test_count": 1687}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33650", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": 1, "test_count": 1168}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsholdingsinc", "client_asn_number": "AS22258", "client_asn_name": "Comcast Cable Communications Holdings, Inc"}, "data": {"last_three_month_test_count": 20, "test_count": 1010}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsholdingsinc", "client_asn_number": "AS7015", "client_asn_name": "Comcast Cable Communications Holdings, Inc"}, "data": {"last_three_month_test_count": 33, "test_count": 994}}, {"meta": {"client_asn_name_lookup": "comcasttelecommunicationsinc", "client_asn_number": "AS13385", "client_asn_name": "Comcast Telecommunications, Inc."}, "data": {"last_three_month_test_count": 2, "test_count": 978}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33651", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": null, "test_count": 581}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33657", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": 5, "test_count": 471}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsholdingsinc", "client_asn_number": "AS7016", "client_asn_name": "Comcast Cable Communications Holdings, Inc"}, "data": {"last_three_month_test_count": 2, "test_count": 378}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsholdingsinc", "client_asn_number": "AS21508", "client_asn_name": "Comcast Cable Communications Holdings, Inc"}, "data": {"last_three_month_test_count": null, "test_count": 324}}, {"meta": {"client_asn_name_lookup": "comcorserviceas", "client_asn_number": "AS39315", "client_asn_name": "COMCOR-SERVICE-AS"}, "data": {"last_three_month_test_count": 4, "test_count": 220}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33491", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": 1, "test_count": 195}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33287", "client_asn_name": "Comcast Cable Communications, Inc."}, 
"data": {"last_three_month_test_count": null, "test_count": 135}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33490", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": 2, "test_count": 122}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33668", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": null, "test_count": 112}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS22909", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": null, "test_count": 51}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33659", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": null, "test_count": 34}}, {"meta": {"client_asn_name_lookup": "comcanadainc", "client_asn_number": "AS14651", "client_asn_name": "COM Canada Inc."}, "data": {"last_three_month_test_count": null, "test_count": 28}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsholdingsinc", "client_asn_number": "AS7725", "client_asn_name": "Comcast Cable Communications Holdings, Inc"}, "data": {"last_three_month_test_count": null, "test_count": 26}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33660", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": null, "test_count": 16}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33652", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": null, "test_count": 12}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33667", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": 1, "test_count": 12}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsinc", "client_asn_number": "AS33661", "client_asn_name": "Comcast Cable Communications, Inc."}, "data": {"last_three_month_test_count": null, "test_count": 6}}, {"meta": {"client_asn_name_lookup": "comcastcablecommunicationsholdingsinc", "client_asn_number": "AS20214", "client_asn_name": "Comcast Cable Communications Holdings, Inc"}, "data": {"last_three_month_test_count": null, "test_count": 2}}]}

Support CSV format

Goal: be able to specify the data format in the URL, not just via HTTP headers (though setting it via headers should continue to work too).

I've begun investigating getting the data in CSV format. I think I've hit a limitation in flask_restplus.

It will work fine if the Accept header is set to text/csv and you supply a function that converts model-marshaled data to CSV via @api.representation('text/csv'). For example:

from flask import make_response

@api.representation('text/csv')
def csv_mediatype_representation(data, code, headers):
    # convert_to_csv is a user-supplied helper (sketched below)
    resp = make_response(convert_to_csv(data), code)
    resp.headers.extend(headers)
    return resp
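
convert_to_csv above is not part of flask_restplus; it's a helper you supply. A minimal sketch of one, assuming the marshaled data carries a flat list of result dicts, might look like:

import csv
from StringIO import StringIO  # Python 2, matching this application

def convert_to_csv(data):
    # Flatten the marshaled 'results' list into CSV text; nested
    # meta/data dicts would need flattening first.
    rows = data.get('results', [])
    if not rows:
        return ''
    out = StringIO()
    writer = csv.DictWriter(out, fieldnames=sorted(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()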

flask_restplus already comes with default support for application/json.

This only becomes a problem when you want the data format specified by a URL parameter instead of an HTTP header, e.g. /locations/nauswaseattle/metrics?timebin=month&format=csv. The way flask_restplus is coded, whatever mediatype is matched is what the Content-Type header on the response gets set to. This means the best we could do is return Content-Type: text/plain for all our responses and just have the content vary. Not ideal -- JSON data should come back with Content-Type: application/json and CSV data with Content-Type: text/csv.

This is the chunk of code in flask_restplus that limits us: https://github.com/noirbizarre/flask-restplus/blob/master/flask_restplus/api.py#L337-L340. It sets the content type then the response is sent out via flask with no way to intercept.

So it seems the only way to really support it is to use Content-Type: text/plain for everything.

I did have another idea though: proxying different format requests. Instead of passing format as a query parameter, we would use it as part of the URL to match, then proxy the request with the appropriate header set. The problem we're trying to solve is having a URL that returns the data with the correct type when shared or put in a link tag. So you could have:

  • /locations/nauswaseattle/metrics?timebin=month -- returns formatted based on Accept header (defaults to application/json)
  • /csv/locations/nauswaseattle/metrics?timebin=month -- proxies a call to /locations/nauswaseattle/metrics?timebin=month with Accept: text/csv
  • /json/locations/nauswaseattle/metrics?timebin=month -- proxies a call to /locations/nauswaseattle/metrics?timebin=month with Accept: application/json (not necessary, but for consistency)

Kind of an ugly way to work around the limitation of flask_restplus. Thoughts @vlandham?


Update

After @vlandham's suggestion, I have managed to get this working using a decorator applied to all resources of the API via restplus' decorators parameter (a sketch of the approach appears after the endpoint lists below). The following endpoints have been updated to support CSV:

Locations

  • /locations/search
  • /locations/top
  • /locations/id/info
  • /locations/id/children
  • /locations/id/clients
  • /locations/id/servers
  • /locations/id/clients/id/info
  • /locations/id/metrics
  • /locations/id/clients/id/metrics
  • /locations/id/servers/id/metrics
  • /locations/id/clients/id/servers/id/metrics

Clients

  • /clients/search
  • /clients/top
  • /clients/id/info
  • /clients/id/servers
  • /clients/id/locations
  • /clients/id/metrics
  • /clients/id/servers/id/metrics

Servers

  • /servers/search
  • /servers/top
  • /servers/id/info
  • /servers/id/clients
  • /servers/id/locations
  • /servers/id/metrics
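
For reference, a minimal sketch of the decorator approach described above (the names here are illustrative, not the code as merged):

from functools import wraps
from flask import request
from flask_restplus import Api

MEDIATYPES = {'csv': 'text/csv', 'json': 'application/json'}

def format_from_query(view):
    # Rewrite the incoming Accept header from a ?format= query parameter
    # so flask_restplus' normal mediatype matching (and therefore the
    # response Content-Type) picks the requested representation.
    @wraps(view)
    def wrapper(*args, **kwargs):
        fmt = request.args.get('format')
        if fmt in MEDIATYPES:
            # Only works if the Accept header has not already been
            # parsed for this request.
            request.environ['HTTP_ACCEPT'] = MEDIATYPES[fmt]
        return view(*args, **kwargs)
    return wrapper

# Applied to every resource of the API:
api = Api(decorators=[format_from_query])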

Improve stats profiling of API

Want to monitor, among other things:

  • Bigtable query avg/min/med/max time taken by API query
  • total API time taken by API query
  • payload size by API query

Optimally a graphite-like dashboard hosted on google cloud would be great.
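
As a starting point, a minimal sketch of per-query timing (record_metric is a hypothetical hook for whatever stats backend is chosen):

import time
from functools import wraps

def timed(metric_name):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record elapsed seconds even if the call raised;
                # record_metric is a hypothetical stats hook.
                record_metric(metric_name, time.time() - start)
        return wrapper
    return decorator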

implement facet search

The power tool component of the client requires more sophisticated search capabilities.

Current strategy for dealing with these searches:

There are 3 facets to orient search around:

  • location
  • client isp
  • transit (server) isp

Each facet allows for searching of all 3 entities but with different constraints.

  • the location facet allows searching for client isps and transit isps, limiting the results to only those client isps and transit isps found in that location
  • the client isp facet likewise allows searching for locations and transit isps, listing only locations the selected client isps are active in, and only transit isps with data coming from the selected client isp(s)
  • the transit isp facet allows searching for locations and client isps. The locations are those from which client isp data terminating with this transit isp originates (so this is still the client location), and client isps are filtered to only those that have data terminating with this transit isp

Filters

With these facets, here are the different entity/filtering combinations we have:

  • location with no filters
  • location filter by client isp
  • location filter by transit isp
  • client isp with no filters
  • client isp filter by location
  • client isp filter by transit isp
  • transit isp with no filters
  • transit isp filter by location
  • transit isp filter by client isp

A tradeoff of this approach is that it limits us to filtering by a single entity. This works well with our facet-type UI, but means that client isp, for example, could only be filtered by location, not by location and transit isp.

Possible Implementation

We have an existing solution to filtering by one entity in place:

client_location_client_asn is a table with a compound key of 2 values:

  • location_key
  • client_asn_number

This provides a solution for filtering a client isp by a location.
If we expand this solution to include the additional filtering we need, it would involve making these tables:

  • client_location_search ✔️
    • keys: reverse_location_key
  • client_location_client_asn ✔️
    • keys: location_key, client_asn_num
  • client_location_transit_asn
    • keys: location_key, transit_asn_num
  • client_asn_search ✔️
    • keys: client_asn_name_lookup
  • client_asn_client_location
    • keys: client_asn_num, client_location
  • client_asn_transit_asn
  • transit_asn_search
  • transit_asn_client_location
  • transit_asn_client_asn

where ✔️ indicates already existing tables.

Our current implementation of location_search only provides starts-with name matching, due to Bigtable query capabilities. This is also a limitation of client_asn_search, and would probably be a limitation for transit_asn_search as well.
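
To illustrate the limitation: Bigtable can only scan efficiently over row-key ranges, so a name search becomes a prefix scan on the *_search table key. A sketch using the google.cloud.bigtable client (table is a Table object; '~' sorts after the lowercase alphanumerics used in the lookup keys):

def prefix_search(table, prefix):
    # Anything other than a starts-with match would require a full
    # table scan, which is why substring search isn't offered.
    return table.read_rows(start_key=prefix, end_key=prefix + '~')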

For filtering, the keys are all built from ASN keys and location keys. So the easiest thing to do would be to pass all results matching a filter (for example, all transit isps with a client location of 'new york') to the API. If the resulting dataset is large, further filtering could occur at the API level (for example, only pulling out values with names containing 'lev').

The UI needs to allow multiple values in the initial filter. We will treat these as a union when filtering the dependent entities.

For example, when faceting by location, New York and London are selected. The user then searches for a client isp. The API in this scenario must return client isps that appear in New York or London.

This could be implemented at the API level by acquiring both client isp lists from Bigtable and then performing the union there.
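
A sketch of that union step (fetch_client_isps_for_location is a hypothetical helper wrapping the Bigtable read for a single location key):

def client_isps_for_locations(location_keys):
    merged = {}
    for key in location_keys:
        for isp in fetch_client_isps_for_location(key):  # hypothetical helper
            # Deduplicate by ASN so an isp active in several selected
            # locations appears only once in the result.
            merged.setdefault(isp['client_asn_number'], isp)
    return list(merged.values())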

Metrics searches that return no data also return no meta information

It would be ideal to have meta information returned. If this isn't possible, I can work around it on the front-end (I have that information elsewhere in the store).

Example

Seattle + AS22773 + AS174 - no meta besides id

http://mlab-api-dot-mlab-oti.appspot.com/locations/nauswaseattle/clients/AS22773/servers/AS174/metrics?startdate=2015-10-01&enddate=2015-11-01&timebin=day
meta:

{ "id": "nauswaseattle_AS22773_AS174" }

Kansas + AS22773 + AS174 - full meta

http://mlab-api-dot-mlab-oti.appspot.com/locations/nausks/clients/AS22773/servers/AS174/metrics?startdate=2015-10-01&enddate=2015-11-01&timebin=day
meta:

{
  "client_country": "United States",
  "client_region": "Kansas",
  "local_zone_name": "America/Chicago",
  "client_continent": "North America",
  "client_country_code": "US",
  "client_asn_name": "Cox Communications Inc.",
  "server_asn_name": "Cogent Communications",
  "client_continent_code": "NA",
  "local_time_zone": "CDT",
  "client_region_code": "KS",
  "client_asn_number": "AS22773",
  "id": "nausks_AS22773_AS174",
  "client_location_key": "nausks",
  "server_asn_number": "AS174"
}
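
One possible shape for a fix, sketched (lookup_location and lookup_asn are hypothetical helpers that would read the existing info tables):

def metrics_meta(location_key, client_asn, server_asn, results):
    if results:
        # The full meta already comes back alongside the data rows.
        return results[0]['meta']
    # No data: fall back to the info tables so callers still get names.
    meta = {'id': '_'.join([location_key, client_asn, server_asn])}
    meta.update(lookup_location(location_key))  # hypothetical helper
    meta.update(lookup_asn(client_asn))         # hypothetical helper
    meta.update(lookup_asn(server_asn))         # hypothetical helper
    return meta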

Search for New York City doesn't return it

http://localhost:8080/locations/search?q=newyork

{
  "results": [
    {
      "data": {},
      "meta": {
        "client_city": "New York",
        "client_continent": "North America",
        "client_continent_code": "NA",
        "client_country": "United States",
        "client_country_code": "US",
        "client_region": "New York",
        "client_region_code": "NY",
        "id": "nausny",
        "location_key": "nausny",
        "test_count": 6521672,
        "type": "region"
      }
    },
    {
      "data": {},
      "meta": {
        "client_city": "New York",
        "client_continent": "Europe",
        "client_continent_code": "EU",
        "client_country": "United Kingdom",
        "client_country_code": "GB",
        "client_region": "Lincolnshire",
        "client_region_code": "H7",
        "id": "eugbh7newyork",
        "location_key": "eugbh7newyork",
        "test_count": 1136,
        "type": "city"
      }
    },
    {
      "data": {},
      "meta": {
        "client_city": "New York Mills",
        "client_continent": "North America",
        "client_continent_code": "NA",
        "client_country": "United States",
        "client_country_code": "US",
        "client_region": "Minnesota",
        "client_region_code": "MN",
        "id": "nausmnnewyorkmills",
        "location_key": "nausmnnewyorkmills",
        "test_count": 258,
        "type": "city"
      }
    },
    {
      "data": {},
      "meta": {
        "client_city": "New York Mills",
        "client_continent": "North America",
        "client_continent_code": "NA",
        "client_country": "United States",
        "client_country_code": "US",
        "client_region": "New York",
        "client_region_code": "NY",
        "id": "nausnynewyorkmills",
        "location_key": "nausnynewyorkmills",
        "test_count": 123,
        "type": "city"
      }
    },
    {
      "data": {},
      "meta": {
        "client_city": "New York",
        "client_continent": "North America",
        "client_continent_code": "NA",
        "client_country": "Mexico",
        "client_country_code": "MX",
        "client_region": "Chiapas",
        "client_region_code": "05",
        "id": "namx05newyork",
        "location_key": "namx05newyork",
        "test_count": 84,
        "type": "city"
      }
    }
  ]
}

Add in generic `id` field to data objects

Locations should have location key as their id (e.g. nausmacambridge)
Client ISPs should have their asn_number as their id (e.g. AS3215)
Transit ISPs should have their keyed name as their id (e.g. ghanaixp)

This could happen at the API level or at the client transform level. Probably best at the API level. It could even happen at the pipeline level!
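
At the API level this could be as simple as mapping each entity type to the field that already holds its key (the transit isp field name here is a guess):

def add_generic_id(meta, entity_type):
    key_field = {
        'location': 'location_key',                 # e.g. nausmacambridge
        'client_isp': 'client_asn_number',          # e.g. AS3215
        'transit_isp': 'server_asn_name_lookup',    # hypothetical field name, e.g. ghanaixp
    }[entity_type]
    meta['id'] = meta[key_field]
    return meta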

debugging deployment issue

When attempting to update the App Engine service using the deploy instructions in the README, we receive errors that need to be debugged. Posting here for reference and in case others have thoughts. Below are the error results of a recent attempt (Cloud Builder logs):

...
Updating service [data-api] (this may take several minutes)...failed.          
ERROR: (gcloud.app.deploy) Error Response: [9] 
Application startup error:
+ [[ sandbox == production ]]
+ [[ sandbox == staging ]]
+ [[ sandbox == sandbox ]]
+ source ./environments/sandbox.sh
++ BIGTABLE_INSTANCE=viz-pipeline
++ PROJECT=mlab-sandbox
++ API_MODE=sandbox
++ BIGTABLE_POOL_SIZE=10
+ BIGTABLE_CONFIG_DIR=bigtable_configs
+ BIGTABLE_INSTANCE=viz-pipeline
+ PROJECT=mlab-sandbox
+ API_MODE=sandbox
+ GOOGLE_APPLICATION_CREDENTIALS=cred.json
+ BIGTABLE_POOL_SIZE=10
+ gunicorn --timeout=1480 -b :8080 main:app
[2019-03-05 17:55:52 +0000] [7] [INFO] Starting gunicorn 19.6.0
[2019-03-05 17:55:52 +0000] [7] [INFO] Listening at: http://0.0.0.0:8080 (7)
[2019-03-05 17:55:52 +0000] [7] [INFO] Using worker: sync
[2019-03-05 17:55:52 +0000] [11] [INFO] Booting worker with pid: 11
[2019-03-05 17:55:53 +0000] [11] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 557, in spawn_worker
    worker.init_process()
  File "/env/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
    self.load_wsgi()
  File "/env/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 136, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/env/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/env/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
    return self.load_wsgiapp()
  File "/env/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/env/local/lib/python2.7/site-packages/gunicorn/util.py", line 357, in import_app
    __import__(module)
  File "/mlab-vis-api/main.py", line 15, in <module>
    from mlab_api.endpoints.locations import LOCATIONS_NS
  File "/mlab-vis-api/mlab_api/endpoints/locations.py", line 11, in <module>
    from mlab_api.data.data import LOCATION_DATA as DATA
  File "/mlab-vis-api/mlab_api/data/data.py", line 8, in <module>
    from mlab_api.data.location_data import LocationData
  File "/mlab-vis-api/mlab_api/data/location_data.py", line 5, in <module>
    from mlab_api.data.base_data import Data
  File "/mlab-vis-api/mlab_api/data/base_data.py", line 10, in <module>
    import mlab_api.data.bigtable_utils as bt
  File "/mlab-vis-api/mlab_api/data/bigtable_utils.py", line 11, in <module>
    from google.cloud import bigtable
  File "/env/local/lib/python2.7/site-packages/google/cloud/bigtable/__init__.py", line 21, in <module>
    from google.cloud.bigtable.client import Client
  File "/env/local/lib/python2.7/site-packages/google/cloud/bigtable/client.py", line 34, in <module>
    from google.gax.utils import metrics
  File "/env/local/lib/python2.7/site-packages/google/gax/__init__.py", line 44, in <module>
    from google.rpc import code_pb2
  File "/env/local/lib/python2.7/site-packages/google/rpc/code_pb2.py", line 23, in <module>
    serialized_pb=_b('\n\x15google/rpc/code.proto\x12\ngoogle.rpc*\xb7\x02\n\x04\x43ode\x12\x06\n\x02OK\x10\x00\x12\r\n\tCANCELLED\x10\x01\x12\x0b\n\x07UNKNOWN\x10\x02\x12\x14\n\x10INVALID_ARGUMENT\x10\x03\x12\x15\n\x11\x44\x45\x41\x44LINE_EXCEEDED\x10\x04\x12\r\n\tNOT_FOUND\x10\x05\x12\x12\n\x0e\x41LREADY_EXISTS\x10\x06\x12\x15\n\x11PERMISSION_DENIED\x10\x07\x12\x13\n\x0fUNAUTHENTICATED\x10\x10\x12\x16\n\x12RESOURCE_EXHAUSTED\x10\x08\x12\x17\n\x13\x46\x41ILED_PRECONDITION\x10\t\x12\x0b\n\x07\x41\x42ORTED\x10\n\x12\x10\n\x0cOUT_OF_RANGE\x10\x0b\x12\x11\n\rUNIMPLEMENTED\x10\x0c\x12\x0c\n\x08INTERNAL\x10\r\x12\x0f\n\x0bUNAVAILABLE\x10\x0e\x12\r\n\tDATA_LOSS\x10\x0f\x42X\n\x0e\x63om.google.rpcB\tCodeProtoP\x01Z3google.golang.org/genproto/googleapis/rpc/code;code\xa2\x02\x03RPCb\x06proto3')
TypeError: __new__() got an unexpected keyword argument 'serialized_options'
[2019-03-05 17:55:53 +0000] [11] [INFO] Worker exiting (pid: 11)
[2019-03-05 17:55:53 +0000] [7] [INFO] Shutting down: Master
[2019-03-05 17:55:53 +0000] [7] [INFO] Reason: Worker failed to boot.

When deployed, turn off debug mode

Debug mode adds a bunch of whitespace to the JSON responses, which is undesired in production.

See main.py

if __name__ == '__main__':
    app.run(port=8080, debug=True)
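
A minimal sketch of the fix, keyed off the existing API_MODE variable (assuming it is set in the deployed environment):

import os

if __name__ == '__main__':
    # Only enable debug (and its pretty-printed, whitespace-heavy JSON)
    # outside of production.
    debug = os.environ.get('API_MODE') != 'production'
    app.run(port=8080, debug=debug)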

Add prometheus tracking

To start, track failing requests and data download requests.
Later, would be great to separate vis client requests from external client requests and track those (otherwise we're effectively replicating google analytics...)
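
As a starting point, a sketch using the prometheus_client package (an assumption; this repo does not yet depend on it) to count failing requests:

from flask import request
from prometheus_client import Counter

FAILED_REQUESTS = Counter(
    'mlab_api_failed_requests_total',
    'API requests that returned a 4xx/5xx status',
    ['endpoint', 'status'])

@app.after_request
def track_failures(response):
    # Count every error response, labeled by path and status code.
    if response.status_code >= 400:
        FAILED_REQUESTS.labels(
            endpoint=request.path,
            status=str(response.status_code)).inc()
    return response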
