
gnocchi's People

Contributors

aalvrz, asu4ni, atmalagon, cdent, chenchaozhe1988, chungg, gentux, javacruft, jd, jizilian, kajinamit, larsks, lianhao, luogangyi, mergify[bot], mrunge, openvdro, pedro-martins, pkilambi, rafaelweingartner, sheeprine, sileht, stephenfin, sum12, tobias-urdin, waipeng, whyliyi, yprokule, zhang-shengping, zqfan


gnocchi's Issues

Missing dependencies when running gnocchi-config-generator

Hi,

I have installed gnocchi 4.0.0 on ubuntu 17.04 (x64) via pip install gnocchi. It is running on python 2.7.13 in a virtualenv (15.1.0).

When executing gnocchi-config-generator, the following modules are missing:

  • futurist
  • tooz
  • oslo_db
  • lz4

I managed to install them manually and got gnocchi-config-generator running, but for a smooth start these dependencies should be included in requirements.txt, the setuptools config, or wherever is appropriate.

Cheers
Carsten

ambiguous archive policy matching with duplicate patterns

we allow archive policy rule patterns to be duplicated. this creates ambiguity, as we have no idea which rule will be picked... currently it matches in whatever order SQL returns the rules (I assume creation time).

we should either not allow duplicate archive rules with the same pattern, OR we need to allow rules to be scoped to a project (and still not allow the same pattern within a scope).
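A minimal sketch of one deterministic alternative: among all rules whose pattern matches a metric name, prefer the longest (most specific) pattern instead of relying on SQL return order. This is purely illustrative, not gnocchi's actual matching code; the rule table and names are assumptions.

```python
import fnmatch

# Hypothetical rule table mapping metric-name patterns to policy names.
rules = {"cpu*": "medium", "cpu_util": "high", "*": "low"}

def match_rule(metric_name):
    # Collect every matching pattern, then break ties deterministically
    # by preferring the longest (most specific) one.
    candidates = [p for p in rules if fnmatch.fnmatch(metric_name, p)]
    return rules[max(candidates, key=len)] if candidates else None

print(match_rule("cpu_util"))   # high  (exact pattern beats "cpu*" and "*")
print(match_rule("disk_read"))  # low   (only "*" matches)
```

The same tie-break would also make a duplicated pattern harmless in practice, though rejecting duplicates at creation time is still the cleaner fix.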

Creation of archive_policy with points:0 should NOT be allowed

gnocchi archive-policy create test1 -d granularity:5m,timespan:45 -f json

creates a policy with points:0, causing gnocchi archive-policy list to fail with a 500 and the following traceback:

Traceback (most recent call last):
  File "/usr/lib64/python2.7/wsgiref/handlers.py", line 85, in run
    self.result = application(self.environ, self.start_response)
  File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_middleware/base.py", line 126, in __call__
    response = req.get_response(self.application)
  File "/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send
    application, catch_exc_info=False)
  File "/usr/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/site-packages/paste/urlmap.py", line 216, in __call__
    return app(environ, start_response)
  File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
    resp = self.call_func(req, *args, **self.kwargs)
  File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
    return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_middleware/base.py", line 126, in __call__
    response = req.get_response(self.application)
  File "/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send
    application, catch_exc_info=False)
  File "/usr/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application
    app_iter = application(self.environ, start_response)
  File "/usr/lib/python2.7/site-packages/webob/exc.py", line 1162, in __call__
    return self.application(environ, start_response)
  File "/usr/lib/python2.7/site-packages/gnocchi/rest/app.py", line 68, in __call__
    return self.app(environ, start_response)
  File "/usr/lib/python2.7/site-packages/pecan/middleware/recursive.py", line 56, in __call__
    return self.application(environ, start_response)
  File "/usr/lib/python2.7/site-packages/pecan/core.py", line 840, in __call__
    return super(Pecan, self).__call__(environ, start_response)
  File "/usr/lib/python2.7/site-packages/pecan/core.py", line 683, in __call__
    self.invoke_controller(controller, args, kwargs, state)
  File "/usr/lib/python2.7/site-packages/pecan/core.py", line 574, in invoke_controller
    result = controller(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/gnocchi/rest/__init__.py", line 354, in get_all
    return pecan.request.indexer.list_archive_policies()
  File "/usr/lib/python2.7/site-packages/gnocchi/indexer/sqlalchemy.py", line 559, in list_archive_policies
    return list(session.query(ArchivePolicy).all())
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2613, in all
    return list(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 86, in instances
    util.raise_from_cause(err)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 71, in instances
    rows = [proc(row) for row in fetch]
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 428, in _instance
    loaded_instance, populate_existing, populators)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 486, in _populate_full
    dict_[key] = getter(row)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/type_api.py", line 1030, in process
    return process_value(value, dialect)
  File "/usr/lib/python2.7/site-packages/gnocchi/indexer/sqlalchemy_base.py", line 113, in process_result_value
    return [archive_policy.ArchivePolicyItem(**v) for v in values]
  File "/usr/lib/python2.7/site-packages/gnocchi/archive_policy.py", line 163, in __init__
    raise ValueError("Number of points should be > 0")
ValueError: Number of points should be > 0
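A sketch of the validation this issue asks for, moved to creation time instead of list time. The function name and signature are illustrative, not gnocchi's actual code; the point is that a timespan smaller than the granularity yields zero points and should be rejected up front.

```python
def validate_definition(granularity_seconds, timespan_seconds):
    # Number of retained points is timespan divided by granularity;
    # reject definitions that would compute to zero at creation time.
    points = timespan_seconds // granularity_seconds
    if points <= 0:
        raise ValueError("Number of points should be > 0")
    return points

try:
    # Mirrors the reported command: granularity 5m (300s), timespan 45s.
    validate_definition(300, 45)
except ValueError as exc:
    print(exc)  # Number of points should be > 0
```

Rejecting the definition in the POST handler would turn the 500 on listing into a 400 on creation.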

redis: pipelining batch posts

redis pipelines add the ability to execute multiple actions in a single round-trip request. we should leverage this to minimise the number of individual requests:

  • batch measures post
  • computing report

Unable to install from tarball

While trying to install gnocchi from the tarball built by GitHub (https://github.com/gnocchixyz/gnocchi/archive/master.tar.gz), pbr gives errors:

INFO:kolla.image.build.gnocchi-base:Processing /gnocchi
INFO:kolla.image.build.gnocchi-base:    Complete output from command python setup.py egg_info:
INFO:kolla.image.build.gnocchi-base:    ERROR:root:Error parsing
INFO:kolla.image.build.gnocchi-base:    Traceback (most recent call last):
INFO:kolla.image.build.gnocchi-base:      File "/var/lib/kolla/venv/lib/python2.7/site-packages/pbr/core.py", line 111, in pbr
INFO:kolla.image.build.gnocchi-base:        attrs = util.cfg_to_args(path, dist.script_args)
INFO:kolla.image.build.gnocchi-base:      File "/var/lib/kolla/venv/lib/python2.7/site-packages/pbr/util.py", line 249, in cfg_to_args
INFO:kolla.image.build.gnocchi-base:        pbr.hooks.setup_hook(config)
INFO:kolla.image.build.gnocchi-base:      File "/var/lib/kolla/venv/lib/python2.7/site-packages/pbr/hooks/__init__.py", line 25, in setup_hook
INFO:kolla.image.build.gnocchi-base:        metadata_config.run()
INFO:kolla.image.build.gnocchi-base:      File "/var/lib/kolla/venv/lib/python2.7/site-packages/pbr/hooks/base.py", line 27, in run
INFO:kolla.image.build.gnocchi-base:        self.hook()
INFO:kolla.image.build.gnocchi-base:      File "/var/lib/kolla/venv/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 26, in hook
INFO:kolla.image.build.gnocchi-base:        self.config['name'], self.config.get('version', None))
INFO:kolla.image.build.gnocchi-base:      File "/var/lib/kolla/venv/lib/python2.7/site-packages/pbr/packaging.py", line 755, in get_version
INFO:kolla.image.build.gnocchi-base:        name=package_name))
INFO:kolla.image.build.gnocchi-base:    Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name gnocchi was given, but was not able to be found.
INFO:kolla.image.build.gnocchi-base:    error in setup command: Error parsing /tmp/pip-QCtHsN-build/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name gnocchi was given, but was not able to be found.

Executing pbr versioning inside the downloaded tarball gives the same error:

python -c "import pbr.version; print(pbr.version.VersionInfo('gnocchi'))"

gnocchi-upgrade INFO output is barely useful

+ gnocchi-upgrade
2017-06-15 12:46:00,338 [8] INFO gnocchi.cli: Upgrading indexer <gnocchi.indexer.sqlalchemy.SQLAlchemyIndexer object at 0x7f2b68768b70>
2017-06-15 12:46:00,548 [8] INFO gnocchi.cli: Upgrading storage <gnocchi.storage.file.FileStorage object at 0x7f2b15e41d30>

it would be better to have real info here. Let's implement a real __str__ on those storage classes.
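A minimal sketch of what such a __str__ could look like; the class and attribute names mirror the log output above but this is not the actual gnocchi implementation.

```python
class FileStorage(object):
    """Illustrative stand-in for gnocchi.storage.file.FileStorage."""

    def __init__(self, basepath):
        self.basepath = basepath

    def __str__(self):
        # Replace the default <... object at 0x...> repr with the class
        # name and the one piece of state an operator actually wants.
        return "%s: %s" % (self.__class__.__name__, self.basepath)

print(FileStorage("/var/lib/gnocchi"))  # FileStorage: /var/lib/gnocchi
```

With that, the upgrade log line reads "Upgrading storage FileStorage: /var/lib/gnocchi" instead of a bare object address.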

Measure intervals become random after some time

Hi !

I am using gnocchi 3.1 with ceilometer. After OpenStack has been running for a few days, the interval of the measures becomes random.

such as :

| 2017-07-19T22:56:00+00:00 |        60.0 | 0.0997658163737 |
| 2017-07-19T23:00:00+00:00 |        60.0 | 0.0945002171601 |
| 2017-07-19T23:01:00+00:00 |        60.0 | 0.0917278430228 |
| 2017-07-19T23:03:00+00:00 |        60.0 |  0.115747232384 |
| 2017-07-19T23:11:00+00:00 |        60.0 |  0.100336360927 |
| 2017-07-19T23:13:00+00:00 |        60.0 | 0.0999616047476 |
| 2017-07-19T23:19:00+00:00 |        60.0 |  0.104330353166 |
| 2017-07-19T23:23:00+00:00 |        60.0 | 0.0958292853446 |
| 2017-07-19T23:49:00+00:00 |        60.0 | 0.0973490111969 |
| 2017-07-19T23:52:00+00:00 |        60.0 |  0.099942496419 |
| 2017-07-19T23:55:00+00:00 |        60.0 | 0.0966297611841 |
| 2017-07-20T00:06:00+00:00 |        60.0 | 0.0998555805381 |
| 2017-07-20T00:10:00+00:00 |        60.0 | 0.0915077634355 |
| 2017-07-20T00:13:00+00:00 |        60.0 | 0.0997182204077 |
| 2017-07-20T00:28:00+00:00 |        60.0 | 0.0996600330513 |
| 2017-07-20T00:42:00+00:00 |        60.0 | 0.0999622870058 |
| 2017-07-20T00:51:00+00:00 |        60.0 |  0.099988610464 |
| 2017-07-20T00:52:00+00:00 |        60.0 |  0.092102811857 |
| 2017-07-20T01:12:00+00:00 |        60.0 |  0.108646466937 |
| 2017-07-20T01:13:00+00:00 |        60.0 | 0.0827301587764 |
| 2017-07-20T01:22:00+00:00 |        60.0 | 0.0999543758252 |
| 2017-07-20T01:30:00+00:00 |        60.0 | 0.0917056522679 |
| 2017-07-20T01:32:00+00:00 |        60.0 | 0.0959137146212 |
+---------------------------+-------------+-----------------+

I am sure that the data I collect is strictly collected every 60 seconds.

my gnocchi config is :

[database]
backend = sqlalchemy
[indexer]
url = mysql://gnocchi:[email protected]:3311/gnocchi

[keystone_authtoken]
auth_uri = http://13.13.0.3:5005/v3
auth_url = http://13.13.0.3:35362/v3
memcached_servers = "mgm-net0:11211,mgm-net1:11211"
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = gnocchi
password = aaa12345+

[statsd]
resource_id = 8e5f0fc4-72ad-4abe-9541-afcde4599122
user_id = 5da962a0-273d-4ee4-9645-9cb6cfe1eaef
project_id = c43b31e1-9db1-46dc-a99f-a7a0b6b4ab90
archive_policy_name = low
flush_delay = 5

[storage]
driver = ceph
ceph_pool = gnocchi

[metricd]
workers = 10

[api]
middlewares = oslo_middleware.cors.CORS
auth_mode = keystone

Is there anything wrong with this config?

Must coordination_url be set?

regional incoming storage targets

copy of the thread discussion here: http://lists.openstack.org/pipermail/openstack-dev/2017-May/117741.html

here's a scenario: i'd like aggregates stored centrally, as is done currently with the ceph/swift/s3 drivers, but i want to collect data from many different regions spanning the globe. they can all hit the same incoming storage, but:

  • that will be a hell of a lot of load
  • single incoming storage locality might not be optimal for all regions
    causing the write performance to take longer than needed for a 'cache'
    storage
  • sending HTTP POST with JSON payload probably more bandwidth than
    binary serialised format gnocchi uses internally.

i'm thinking it'd be good to support the ability to have each region store data 'locally' to minimise latency, and then have regional metricd agents aggregate into a central target. this is technically possible right now by declaring regional (write-only?) APIs with the same storage and indexer targets but a different incoming target per region. the problem, i think, is how to handle coordination_url. it cannot be the same coordination_url, since that would cause sack locks to overlap. if they're different, then i think there's an issue with having a centralised API (in addition to regional APIs): specifically, the centralised API cannot 'refresh'.

  1. look at supporting this fully
  2. document small, medium, large deployment strategies

TypeError: 'Retrying' object is not callable

gnocchi-metricd is not working in devstack-stable/ocata

In gnocchi-metricd .log:

2017-07-20 05:04:06,805 [25546] ERROR cotyledon._utils: Unhandled exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cotyledon/_utils.py", line 95, in exit_on_exception
    yield
  File "/usr/lib/python2.7/site-packages/cotyledon/_service.py", line 139, in _run
    self.run()
  File "/opt/stack/gnocchi/gnocchi/cli.py", line 131, in run
    self._configure()
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in wrapped_f
    return r.call(f, *args, **kw)
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 235, in call
    do = self.iter(result=result, exc_info=exc_info)
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 194, in iter
    return fut.result()
  File "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 398, in result
    return self.__get_result()
  File "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 238, in call
    result = fn(*args, **kwargs)
  File "/opt/stack/gnocchi/gnocchi/cli.py", line 198, in _configure
    self.conf.storage.coordination_url)
TypeError: 'Retrying' object is not callable

Is this related to this recent commit?
112bf62

I'm using tenacity==3.7.1 which is pinned in requirements/upper-constraints.txt

bulk GET metrics support

it'd be nice to support retrieving multiple metrics at once if we want to minimise API requests. there are multiple dimensions we could do this on:

  1. get all metrics for resource A
  2. get all 'cpu_util' for all resources
  3. get any metric on any resource

i think (1) makes sense, but do we need to consider the other two cases? if it's just the first case, we can have a payload like {'cpu_util': [(time, value), ...], 'memory': [(time, value), ...]}. if we need to handle cross-resource requests, the payload would probably need another level for the resource id (or we use metric_id, but i don't think that's a good idea).
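The two payload shapes discussed above can be sketched as plain data structures. These are illustrative only, not an agreed API; the key names and timestamps are assumptions.

```python
# Case (1): all metrics for a single resource -- one level of nesting,
# keyed by metric name.
single_resource = {
    "cpu_util": [("2017-07-20T00:00:00", 0.09)],
    "memory": [("2017-07-20T00:00:00", 512.0)],
}

# Cross-resource cases (2) and (3): one extra nesting level, keyed by
# resource id, with the per-metric payload unchanged underneath.
cross_resource = {
    "resource-a": {"cpu_util": [("2017-07-20T00:00:00", 0.09)]},
    "resource-b": {"cpu_util": [("2017-07-20T00:00:00", 0.11)]},
}

print(sorted(cross_resource))  # ['resource-a', 'resource-b']
```

Keeping the inner shape identical in both cases would let clients share parsing code regardless of which bulk endpoint they hit.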

Cannot read resources etc. whilst using auth_mode keystone or noauth

I'm planning to send all the metrics collected by gnocchi to a graph in Grafana.

Metrics are sent to and collected by gnocchi, and ceilometer (telemetry) is doing its job.
I can only list metrics and resources in basic mode. In noauth mode it shows me nothing (or sometimes a 403), and in keystone mode it comes up with the following result:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator at 
 root@localhost to inform them of the time this error occurred,
 and the actions you performed just before this error.</p>
<p>More information about this error may be available
in the server error log.</p>
</body></html>
 (HTTP 500)

or the following (when noauth returns the 403):

The request you have made requires authentication. (HTTP 401)

I ran tcpdump, and all three methods send the same thing (shown below) but get different results:

Host: localhost:8041
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: application/json, */*
User-Agent: gnocchi keystoneauth1/2.20.0 python-requests/2.10.0 CPython/2.7.5
Authorization: basic YWRtaW46

Gnocchi, Ceilometer and Keystone are configured as shown below. I sometimes play with the domains and IP addresses, but everything is set up on one host:

Gnocchi

[DEFAULT]
debug = true
verbose = true

[api]
workers = 4
paste_config = /usr/lib/python2.7/site-packages/gnocchi/rest/api-paste.ini
auth_mode = keystone

[cors]
allowed_origin = *
allow_credentials = false

[database]
backend = sqlalchemy

[indexer]
url = "postgresql://gnocchi:[email protected]/gnocchi"

[metricd]
workers = 8

[statsd]
host = 0.0.0.0
port = 8125

[storage]
driver = file
file_basepath = /var/lib/gnocchi

[keystone_authtoken]
auth_type = password
auth_uri = http://127.0.0.1:5000/v3
auth_uri = http://127.0.0.1:35357
memcached_servers = http://127.0.0.1:11211
project_domain_name = default
user_domain_name = default
project_name = service
username = gnocchi
password = gnocchi
interface = internalURL
region_name = RegionX

Ceilometer

[DEFAULT]
transport_url = rabbit://openstack:[email protected]
meter_dispatchers=gnocchi
event_dispatchers=gnocchi

[api]
gnocchi_is_enabled = true


[dispatcher_gnocchi]
filter_service_activity = False
archive_policy = low
filter_project = gnocchi
url = http://127.0.0.1:8041
auth_section = service_credentials_gnocchi
resources_definition_file = gnocchi_resources.yaml


[ipmi]
token_cache_time = -1

#whilst using noauth:
[service_credentials_gnocchi]
auth_type=gnocchi-noauth
roles = admin
user_id = a4229d64a35a4886abe50af025190d8f
project_id = dee5750a9d534f79aafdeb2074ae0369
endpoint = http://127.0.0.1:8041

[service_credentials]
auth_url = http://127.0.0.1:5000/v3
project_domain_id = default
user_domain_id = default
auth_type = password
username = ceilometer
project_name = service
password = 123456
interface = internalURL
region_name = Region_LAB

Keystone

[DEFAULT]
admin_token = c01f11a39faceecde032

[catalog]
driver = keystone.catalog.backends.sql.Catalog

[cors]
allowed_origin = *
allow_credentials = true
expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token
allow_methods = GET,PUT,POST,DELETE,PATCH
allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-Domain-Id,X-Domain-Name

[database]
connection = mysql+pymysql://keystone:[email protected]/keystone

[eventlet_server]
bind_host = 127.0.0.1
public_bind_host = 127.0.0.1
admin_bind_host = 127.0.0.1

[identity]
driver = keystone.identity.backends.sql.Identity

[token]
provider = fernet
[tokenless_auth]
[trust]

I would really appreciate it if anyone could tell me whether this is a bug or a misconfiguration; it's been days of working on it with no fix. Thanks.

PostgreSQL driver should set the driver name

WARNING oslo_db.sqlalchemy.engines: URL postgresql://localhost/postgres?host=/var/folders/7k/pwdhb_mj2cv4zyr0kyrlzjx40000gq/T/tmpzbm8cE&port=9824 does not contain a '+drivername' portion, and will make use of a default driver.  A full dbname+drivername:// protocol is recommended.
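A minimal illustration of the warning's point, using SQLAlchemy's URL parser (not gnocchi code): without an explicit "+drivername" portion, the default DBAPI driver is chosen implicitly.

```python
from sqlalchemy.engine.url import make_url

# Bare dialect: SQLAlchemy will pick a default DBAPI driver for it.
implicit = make_url("postgresql://localhost/postgres")
# Explicit dialect+driver, as oslo.db recommends.
explicit = make_url("postgresql+psycopg2://localhost/postgres")

print(implicit.drivername)  # postgresql
print(explicit.drivername)  # postgresql+psycopg2
```

Emitting the fully-qualified form when gnocchi builds the indexer URL would silence the oslo.db warning.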

Add an alerting engine

There's no alerting engine in Gnocchi. It should offer an easy way to trigger actions on e.g. thresholds.

The Aodh project from OpenStack supports Gnocchi, but it does so mainly by polling the API regularly. That is pretty slow at the end of the day, and OpenStack-specific.

We might need to add some other features first, but this issue should be a placeholder to discuss a design.

periodic gate failures - npm

the gate will occasionally fail while attempting to install packages via npm:

npm ERR! network read ECONNRESET
npm ERR! network This is most likely not a problem with npm itself
npm ERR! network and is related to network connectivity.
npm ERR! network In most cases you are behind a proxy or have bad network settings.
npm ERR! network
npm ERR! network If you are behind a proxy, please make sure that the
npm ERR! network 'proxy' config is set properly. See: 'npm help config'
npm ERR! Linux 4.4.0-51-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "s3rver" "--global"
npm ERR! node v4.2.6
npm ERR! npm v3.5.2
npm ERR! code ECONNRESET
npm ERR! errno ECONNRESET
npm ERR! syscall read

support rolling computations

rolling averages and other rolling computations are commonly used by data scientists and quants... we should probably provide this functionality.

we support a moving average, but it's custom python code which is arguably not correct (and now deprecated). we should just use pandas.

filing this in case someone wants to work on it before i start.
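A sketch of what "just use pandas" could look like for a rolling mean; the data and window size are illustrative, not gnocchi code.

```python
import pandas as pd

# Five one-minute samples (illustrative timestamps and values).
idx = pd.date_range("2017-07-20", periods=5, freq="min")
series = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0], index=idx)

# pandas handles the window bookkeeping; the first window-1 results
# are NaN because the window is not yet full.
rolling_mean = series.rolling(window=3).mean()
print(rolling_mean.iloc[-1])  # 4.0, the mean of the last three points
```

The same .rolling(...) accessor also gives sum, std, min, max and quantiles, which covers most of the "rolling computations" the issue mentions.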

linear search used for back window handling

we use a linear search to detect which points to ignore because they are too old for the back window [1]. this works great if none of the points you put in are older than the back window. depending on the size of the new measures, if anywhere around 5-10% of points are older than the back window, linear search is slower. if the new measures set is really large and a lot of points are older than the back window, the performance becomes terrible.

that said, i imagine in the majority of cases linear search is the fastest solution. the question is, do we want to use bisect to handle the odd cases where we do have points to ignore, at the cost of a 'slower' path for the majority?

alternatively, we can also just build the new measures as a pandas series, since it is a series and is going to be merged into one eventually. (i haven't tested the performance of this path.)

[1]

if timestamp >= first_block_timestamp:
    values = values[index:]
    break
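A toy illustration of the bisect alternative (not gnocchi's actual code): because the timestamps are sorted, the first index to keep can be found in O(log n), no matter how many leading points fall behind the back window.

```python
import bisect

# Illustrative sorted measure timestamps; 1-3 are older than the window.
timestamps = [1, 2, 3, 10, 11, 12]
first_block_timestamp = 10

# bisect_left returns the first index whose timestamp is
# >= first_block_timestamp, replacing the linear scan above.
index = bisect.bisect_left(timestamps, first_block_timestamp)
values = timestamps[index:]
print(values)  # [10, 11, 12]
```

When no points are too old, bisect_left still costs O(log n) rather than the linear scan's O(1) early exit, which is exactly the trade-off discussed above.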

Enhance un-aggregated timeserie storage

The unaggregated (raw) time series that each metric has (the back window) is currently stored as one giant blob. Even if the back window is limited to e.g. 1 hour, it can still hold millions of points down to the nanosecond. This can make it huge in theory, even if in practice it never is.

We should find a better way to store it than as one giant blob.

Leverage Swift DLO to store Carbonara splits

After chatting with @thiagodasilva, it seems possible to use Swift DLO mechanism to append new aggregated measure to Carbonara splits directly.

The idea would be to create a DLO manifest for a given split and then append to it using PUT requests, without reading the previous data, much as the Ceph driver does.

The only question right now is that each write would create a very small file in Swift (1 point is 9 bytes), so potentially a lot of tiny files: up to 3600 files of 9 bytes for a whole split in the worst case. It's not clear whether Swift is able to handle that correctly.

gnocchi doc gen on verge of failing

i'm pretty sure the gnocchi.xyz docs job will fail soon. looking at the logs, you can see it's using master libraries to build the stable docs. if any of our libs breaks its API, it will break the gnocchi docs build regardless of whether we fix it in master.

i.e. the logs show lz4 warnings and a pbr error on stable/*:

=> Building ref: stable/3.1
Running Sphinx v1.5.6
making output directory...
ERROR:root:Error parsing
Traceback (most recent call last):
  File "/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/pbr/core.py", line 111, in pbr
    attrs = util.cfg_to_args(path, dist.script_args)
  File "/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/pbr/util.py", line 249, in cfg_to_args
    pbr.hooks.setup_hook(config)
  File "/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/pbr/hooks/__init__.py", line 25, in setup_hook
    metadata_config.run()
  File "/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/pbr/hooks/base.py", line 27, in run
    self.hook()
  File "/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 26, in hook
    self.config['name'], self.config.get('version', None))
  File "/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/pbr/packaging.py", line 755, in get_version
    name=package_name))
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name gnocchi was given, but was not able to be found.
error in setup command: Error parsing /tmp/tmpr2_VHesphinxcontrib_versioning/22b7d4e4a2de2de77ad26795a9de8bf241e07293/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name gnocchi was given, but was not able to be found.
2017-06-27 19:25:09.297 1457 WARNING oslo_db.sqlalchemy.engines [-] URL postgresql://localhost/postgres?host=/tmp/tmpUB6_PY&port=9824 does not contain a '+drivername' portion, and will make use of a default driver. A full dbname+drivername:// protocol is recommended.
2017-06-27 19:25:09.308 1457 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2017-06-27 19:25:09.308 1457 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2017-06-27 19:25:09.371 1457 WARNING oslo_db.sqlalchemy.engines [-] URL postgresql://localhost/postgres?host=/tmp/tmpUB6_PY&port=9824 does not contain a '+drivername' portion, and will make use of a default driver. A full dbname+drivername:// protocol is recommended.
2017-06-27 19:25:09.380 1457 INFO alembic.runtime.migration [-] Context impl PostgresqlImpl.
2017-06-27 19:25:09.380 1457 INFO alembic.runtime.migration [-] Will assume transactional DDL.
2017-06-27 19:25:09.396 1457 INFO alembic.runtime.migration [-] Running stamp_revision -> 1e1a63d3d186
2017-06-27 19:25:09.437 1457 INFO gnocchi.rest.app [-] WSGI config used: /tmp/tmpr2_VHesphinxcontrib_versioning/22b7d4e4a2de2de77ad26795a9de8bf241e07293/gnocchi/rest/api-paste.ini
/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/lz4/__init__.py:30: DeprecationWarning: Call to deprecated function or method dumps (use lz4.block.compress instead).
  def dumps(source):
/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/lz4/__init__.py:30: DeprecationWarning: Call to deprecated function or method dumps (use lz4.block.compress instead).
  def dumps(source):
/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/lz4/__init__.py:30: DeprecationWarning: Call to deprecated function or method dumps (use lz4.block.compress instead).
  def dumps(source):
/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/lz4/__init__.py:30: DeprecationWarning: Call to deprecated function or method dumps (use lz4.block.compress instead).
  def dumps(source):
/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/numpy/core/_methods.py:135: RuntimeWarning: Degrees of freedom <= 0 for slice
keepdims=keepdims)
/home/tester/src/.tox/docs-gnocchi.xyz/local/lib/python2.7/site-packages/numpy/core/_methods.py:127: RuntimeWarning: invalid value encountered in double_scalars

Failed to call periodic 'gnocchi.cli.run_watchers' after redis switch master-slave

Issue:
We used redis as the storage driver; the redis nodes were configured in master-slave mode and managed by redis-sentinel for HA. The option "redis_url" in gnocchi.conf was set to redis-sentinel, so that redis would automatically switch master-slave via redis-sentinel without any change in gnocchi.
But after redis switches master-slave, we always see the error "Failed to call periodic 'gnocchi.cli.run_watchers'" in gnocchi-metricd.log until gnocchi-metricd.service is restarted.

Environment:
Linux: CentOS 7.2
Gnocchi: 4.0
Redis: redis-3.2.3-1
Tooz: 1.57.0

Reproduce:

  1. Install two or more redis nodes, and configure them in master-slave mode.
  2. Install redis-sentinel and configure it to manage the redis nodes.
  3. Configure gnocchi.conf to make it connect to redis-sentinel.
    the configuration in my site is (FYI):
driver = redis
redis_url = redis://redis:[email protected]:6380?sentinel=mymaster
  4. Stop the redis service on the redis master node. (redis-sentinel will elect a new master.)
  5. After a few seconds, the error appears in gnocchi-metricd.log.

Log:

2017-07-05 14:47:10,567 [1834] ERROR futurist.periodics: Failed to call periodic 'gnocchi.cli.run_watchers' (it runs every 30.00 seconds)
Traceback (most recent call last):
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/futurist/periodics.py", line 290, in run
    work()
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/futurist/periodics.py", line 64, in __call__
    return self.callback(*self.args, **self.kwargs)
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/futurist/periodics.py", line 178, in decorator
    return f(*args, **kwargs)
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/gnocchi/cli.py", line 203, in run_watchers
    self.coord.run_watchers()
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/tooz/drivers/redis.py", line 745, in run_watchers
    result = super(RedisDriver, self).run_watchers(timeout=timeout)
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/tooz/coordination.py", line 729, in run_watchers
    timeout=w.leftover(return_none=True))
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/tooz/coordination.py", line 663, in get
    return self._fut.result(timeout=timeout)
  File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
    self.gen.throw(type, value, traceback)
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/tooz/drivers/redis.py", line 51, in _translate_failures
    cause=e)
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/tooz/utils.py", line 225, in raise_with_cause
    excutils.raise_with_cause(exc_cls, message, *args, **kwargs)
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/oslo_utils/excutils.py", line 143, in raise_with_cause
    six.raise_from(exc_cls(message, *args, **kwargs), kwargs.get('cause'))
  File "/opt/openstack/gnocchi/gnocchi-env/lib/python2.7/site-packages/six.py", line 718, in raise_from
    raise value
ToozConnectionError: Error 111 connecting to 10.127.2.122:6379. Connection refused.

Error log.txt
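Until metricd re-establishes its coordination connection on its own after a failover, one workaround could be to wrap the periodic call in a short retry loop. A generic, self-contained sketch follows — ToozConnectionError here is a stand-in class (not imported from tooz), and `call_with_retry` is a hypothetical helper, not Gnocchi code:

```python
import time


class ToozConnectionError(Exception):
    """Stand-in for tooz's connection error, to keep the sketch self-contained."""


def call_with_retry(fn, attempts=3, delay=0.0):
    # Retry a periodic task a few times instead of failing permanently
    # while sentinel is still electing a new master.
    for i in range(attempts):
        try:
            return fn()
        except ToozConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```

Something like `call_with_retry(coord.run_watchers)` in place of the direct call would give sentinel time to finish the election before the periodic task gives up.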

keepalive won't work with our uwsgi config

It's not clear why yet, but HTTP keepalive doesn't work with Gnocchi under uwsgi.

It fails even if we use http11-socket, or http-keepalive combined with add-header="Connection: Keep-Alive".

The current workaround is to set --add-header "Connection: close".
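For reference, a minimal uwsgi fragment applying the workaround (the socket address is an example; option names as in uwsgi's options reference):

```ini
[uwsgi]
; example socket, adjust to your deployment
http-socket = 127.0.0.1:8041
; workaround: disable keepalive by forcing connection close
add-header = Connection: close
```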

Gnocchi metrics aggregation missing granularity exception

Hello,

We use OpenStack with the Gnocchi service for our project, and we ran into unusual behavior while trying to aggregate metrics for "instance_network_interface".

I've traced the exception to its root: if a granularity is not present when trying to do the aggregation, an exception is thrown, but it doesn't fail gracefully; it fails with an HTTP 500 response (I'm guessing this is the standard behavior when pecan can't handle something).

I'll get right to it:
I'm looping through some network metrics as follows (this being just a sample of the code)

metrics = (('network.incoming.bytes.rate', 'bytes_in'),
           ('network.incoming.packets.rate', 'packets_in'),
           ('network.outgoing.bytes.rate', 'bytes_out'),
           ('network.outgoing.packets.rate', 'packets_out'))

for metric in metrics:
    response[metric[1]] = self.gnocchi_admin.metric.aggregation(metrics=metric[0],
                                                                granularity=granularity,
                                                                start=period_start,
                                                                stop=period_end,
                                                                reaggregation='mean',
                                                                resource_type=resource_type,
                                                                query={'=': {'instance_id': resource_id}})

gnocchi_admin is the Python gnocchi client; it uses gnocchiclient.client.metric.aggregation, which in turn does a POST request under v1/aggregation/resource/instance_network_interface/metric/ with some query params (granularity, reaggregation, start, end date).
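For illustration, a rough sketch of the request the client ends up building — the host and query values are placeholders, and this reproduces only the URL shape, not gnocchiclient's actual internals:

```python
# Hypothetical values: host, time range and instance id are placeholders.
base = "http://gnocchi.example.com:8041"
resource_type = "instance_network_interface"
metric = "network.incoming.bytes.rate"

url = "%s/v1/aggregation/resource/%s/metric/%s" % (base, resource_type, metric)
params = {
    "granularity": 1800,
    "reaggregation": "mean",
    "start": "2017-06-27T00:00:00",
    "stop": "2017-06-27T16:00:00",
}
# The JSON body carries the resource query.
body = {"=": {"instance_id": "an-instance-uuid"}}
```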

However, this fails with an HTTP 500 response as follows:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator at 
 [no address given] to inform them of the time this error occurred,
 and the actions you performed just before this error.</p>
<p>More information about this error may be available
in the server error log.</p>
</body></html>
 (HTTP 500)

I connected to the gnocchi server and traced the error, which fails with the following traceback:

2017-06-27 16:04:15.381016 mod_wsgi (pid=11079): Exception occurred processing WSGI script '/var/www/cgi-bin/gnocchi/gnocchi-api'.
2017-06-27 16:04:15.381105 Traceback (most recent call last):
2017-06-27 16:04:15.381133   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2017-06-27 16:04:15.381693     resp = self.call_func(req, *args, **self.kwargs)
2017-06-27 16:04:15.381722   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2017-06-27 16:04:15.381945     return self.func(req, *args, **kwargs)
2017-06-27 16:04:15.381977   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/oslo_middleware/base.py", line 126, in __call__
2017-06-27 16:04:15.382352     response = req.get_response(self.application)
2017-06-27 16:04:15.382390   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/request.py", line 1299, in send
2017-06-27 16:04:15.383169     application, catch_exc_info=False)
2017-06-27 16:04:15.383197   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application
2017-06-27 16:04:15.383230     app_iter = application(self.environ, start_response)
2017-06-27 16:04:15.383513   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/paste/urlmap.py", line 216, in __call__
2017-06-27 16:04:15.383972     return app(environ, start_response)
2017-06-27 16:04:15.383999   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2017-06-27 16:04:15.384248     resp = self.call_func(req, *args, **self.kwargs)
2017-06-27 16:04:15.384273   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2017-06-27 16:04:15.384632     return self.func(req, *args, **kwargs)
2017-06-27 16:04:15.384656   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/oslo_middleware/base.py", line 126, in __call__
2017-06-27 16:04:15.384878     response = req.get_response(self.application)
2017-06-27 16:04:15.384903   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/request.py", line 1299, in send
2017-06-27 16:04:15.385068     application, catch_exc_info=False)
2017-06-27 16:04:15.385089   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application
2017-06-27 16:04:15.385282     app_iter = application(self.environ, start_response)
2017-06-27 16:04:15.385306   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2017-06-27 16:04:15.385324     resp = self.call_func(req, *args, **self.kwargs)
2017-06-27 16:04:15.385332   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2017-06-27 16:04:15.385345     return self.func(req, *args, **kwargs)
2017-06-27 16:04:15.385354   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", line 335, in __call__
2017-06-27 16:04:15.388294     response = req.get_response(self._app)
2017-06-27 16:04:15.388486   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/request.py", line 1299, in send
2017-06-27 16:04:15.388690     application, catch_exc_info=False)
2017-06-27 16:04:15.388994   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/request.py", line 1263, in call_application
2017-06-27 16:04:15.389161     app_iter = application(self.environ, start_response)
2017-06-27 16:04:15.389316   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/webob/exc.py", line 1169, in __call__
2017-06-27 16:04:15.390364     return self.application(environ, start_response)
2017-06-27 16:04:15.390844   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/gnocchi/rest/app.py", line 68, in __call__
2017-06-27 16:04:15.391385     return self.app(environ, start_response)
2017-06-27 16:04:15.391548   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/pecan/middleware/recursive.py", line 56, in __call__
2017-06-27 16:04:15.391842     return self.application(environ, start_response)
2017-06-27 16:04:15.392120   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/pecan/core.py", line 840, in __call__
2017-06-27 16:04:15.392606     return super(Pecan, self).__call__(environ, start_response)
2017-06-27 16:04:15.392744   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/pecan/core.py", line 683, in __call__
2017-06-27 16:04:15.392924     self.invoke_controller(controller, args, kwargs, state)
2017-06-27 16:04:15.393217   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/pecan/core.py", line 574, in invoke_controller
2017-06-27 16:04:15.393364     result = controller(*args, **kwargs)
2017-06-27 16:04:15.393511   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/gnocchi/rest/__init__.py", line 1505, in post
2017-06-27 16:04:15.394345     granularity, needed_overlap, fill, refresh, resample)
2017-06-27 16:04:15.394507   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/gnocchi/rest/__init__.py", line 1614, in get_cross_metric_measures_from_objs
2017-06-27 16:04:15.394683     granularity, resample)
2017-06-27 16:04:15.394806   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/gnocchi/storage/_carbonara.py", line 156, in get_measures
2017-06-27 16:04:15.397060     from_timestamp, to_timestamp)
2017-06-27 16:04:15.397473   File "/openstack/venvs/gnocchi-15.1.4/lib/python2.7/site-packages/gnocchi/storage/_carbonara.py", line 187, in _get_measures_timeserie
2017-06-27 16:04:15.397668     raise storage.GranularityDoesNotExist(metric, granularity)
2017-06-27 16:04:15.397829 GranularityDoesNotExist: Granularity '1800.0' for metric 615e740b-d897-4f69-9d60-c5b48c1b6293 does not exist

So the next thing was to follow the traceback and I've made it to the point where it fails:
https://github.com/gnocchixyz/gnocchi/blob/master/gnocchi/rest/__init__.py#L1653

This was my case, but it could fail anywhere in that try/except block if the granularity is not found.

So evidently I've added an

except GranularityDoesNotExist as e:
    abort(404, e)

at the bottom, and everything worked fine (it failed gracefully), which in turn let me catch the exception explicitly on the client side, etc.

Note that we know what the problem was in the first place (we didn't have the granularity defined in the archive policy, and needed an archive policy rule pointing to our metrics) and we could have avoided the exception that way, but there are a lot of configurations out there, and I'm guessing this is an issue.
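A minimal, self-contained sketch of the graceful-failure pattern described above — the exception class and abort helper are stand-ins, not Gnocchi's actual `gnocchi.storage.GranularityDoesNotExist` or pecan's `abort`:

```python
class GranularityDoesNotExist(Exception):
    """Stand-in for gnocchi.storage.GranularityDoesNotExist."""


def abort(code, detail):
    # Stand-in for pecan.abort(): returns a dict instead of raising,
    # so the sketch stays self-contained.
    return {"status": code, "detail": str(detail)}


def aggregate(fetch, granularity):
    # fetch is whatever performs the aggregation for a given granularity.
    try:
        return {"status": 200, "data": fetch(granularity)}
    except GranularityDoesNotExist as e:
        # Fail gracefully with a 404 instead of letting pecan emit a 500.
        return abort(404, e)
```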

Thanks in advance

Allow to not use existing archive policies when creating a metric

Archive policies are very useful and handy. However they also limit what a user can do with the metrics. If an application wants to create a metric and really knows what kind of aggregation methods it needs, there's no way for it to bypass the archive policy mechanism.

Historically, archive policies were implemented as a mechanism to limit what a user can do and as a template, but it seems to me that this limitation does not make much sense anymore and is actually preventing smarter definitions of metrics.

Coordination member_id should be string according to tooz source

a8eb25e introduced a change to align with the tooz docs, but according to the source code tooz expects the member_id to be a string (https://github.com/openstack/tooz/blame/master/tooz/coordination.py#L754-L755).

    :param member_id: the id of the member
    :type member_id: str

When using bytes as per the tooz documentation, the library fails to initialize due to a UTF-8 conversion error.

Some tooz drivers are still not ready: the memcached text protocol has an issue, and so does the consul driver, which still expects a string that it decodes to form a session_name.

I propose that the above change be rolled back until tooz fully updates its drivers.
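To illustrate the str-vs-bytes mismatch (a contrived sketch, not tooz code): a driver that concatenates the member_id into a session name works with a str but breaks with bytes on Python 3:

```python
import uuid

member_id_str = "gnocchi-%s" % uuid.uuid4()      # what tooz's source expects
member_id_bytes = member_id_str.encode("utf-8")  # what its docs suggest


def build_session_name(member_id):
    # Mimics a hypothetical driver that assumes a str member_id.
    return "session-" + member_id


build_session_name(member_id_str)  # fine
try:
    build_session_name(member_id_bytes)
except TypeError:
    pass  # str + bytes raises TypeError on Python 3
```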

Gnocchi + Keystone = The request you have made requires authentication. (HTTP 401)

Hi,
I have OpenStack Ocata configured on Ubuntu 16.04.2, and for the telemetry service I have ceilometer configured with gnocchi. I followed the document from http://gnocchi.xyz/ to install gnocchi, but unfortunately gnocchi authentication with keystone is not working (auth_mode = keystone).
With auth_mode=noauth it works fine.

My gnocchi config uses a MySQL database for the indexer, files for storage, and keystone for authentication.

Below is the error message ...
gnocchi status
The request you have made requires authentication. (HTTP 401)


openstack metric resource-type list
Failed to discover available identity versions when contacting http://hsdatactlr1.engba.veritas.com:35357/v3. Attempting to parse version from URL.
Internal Server Error (HTTP 500)

Below are my configuration details:


Ceilometer :

cat /etc/ceilometer/ceilometer.conf | grep -iv ^[#] | sed -e '/^$/d'
[DEFAULT]
meter_dispatchers=gnocchi
event_dispatchers=gnocchi
transport_url = rabbit://openstack:[email protected]
[api]
[collector]
[compute]
[coordination]
[cors]
[cors.subdomain]
[database]
[dispatcher_file]
[dispatcher_gnocchi]
filter_service_activity = False
archive_policy = low
[dispatcher_http]
[event]
[hardware]
[ipmi]
[keystone_authtoken]
auth_uri = http://hsdatactlr1.engba.veritas.com:5000
auth_url = http://hsdatactlr1.engba.veritas.com:35357
memcached_servers = hsdatactlr1.engba.veritas.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = ceilometer
password = hyperscale
[matchmaker_redis]
[meter]
[notification]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[polling]
[publisher]
[publisher_notifier]
[rgw_admin_credentials]
[service_credentials]
auth_type = password
auth_url = http://hsdatactlr1.engba.veritas.com:5000/v3
project_domain_name = default
user_domain_name = default
project_name = service
username = ceilometer
password = hyperscale
interface = internalURL
region_name = RegionOne
[service_types]
[storage]
[vmware]
[xenapi]


Gnocchi Config :

cat /etc/gnocchi/gnocchi.conf | grep -iv ^[#] | sed -e '/^$/d'
[DEFAULT]
verbose = true
log_dir = /var/log/gnocchi
[api]
auth_mode = keystone
[archive_policy]
[cors]
[database]
connection = mysql+pymysql://gnocchi-common:[email protected]/gnocchidb
[healthcheck]
[incoming]
[indexer]
url = mysql+pymysql://gnocchi:[email protected]/gnocchi
[keystone_authtoken]
auth_uri = http://hsdatactlr1.engba.veritas.com:5000
auth_url = http://hsdatactlr1.engba.veritas.com:35357
memcached_servers = hsdatactlr1.engba.veritas.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = gnocchi-common
password = hyperscale
interface = internalURL
region_name = RegionOne
service_token_roles_required = True
[metricd]
[oslo_middleware]
[oslo_policy]
[statsd]
[storage]
file_basepath = /var/lib/gnocchi


Gnocchi api config :

cat /etc/gnocchi/api-paste.ini | grep -iv ^[#] | sed -e '/^$/d'
[composite:gnocchi+noauth]
use = egg:Paste#urlmap
/ = gnocchiversions_pipeline
/v1 = gnocchiv1+noauth
/healthcheck = healthcheck
[composite:gnocchi+basic]
use = egg:Paste#urlmap
/ = gnocchiversions_pipeline
/v1 = gnocchiv1+noauth
/healthcheck = healthcheck
[composite:gnocchi+keystone]
use = egg:Paste#urlmap
/ = gnocchiversions_pipeline
/v1 = gnocchiv1+keystone
/healthcheck = healthcheck
[pipeline:gnocchiv1+noauth]
pipeline = http_proxy_to_wsgi gnocchiv1
[pipeline:gnocchiv1+keystone]
pipeline = http_proxy_to_wsgi keystone_authtoken gnocchiv1
[pipeline:gnocchiversions_pipeline]
pipeline = http_proxy_to_wsgi gnocchiversions
[app:gnocchiversions]
paste.app_factory = gnocchi.rest.app:app_factory
root = gnocchi.rest.VersionsController
[app:gnocchiv1]
paste.app_factory = gnocchi.rest.app:app_factory
root = gnocchi.rest.V1Controller
[filter:keystone_authtoken]
use = egg:keystonemiddleware#auth_token
oslo_config_project = gnocchi
[filter:http_proxy_to_wsgi]
use = egg:oslo.middleware#http_proxy_to_wsgi
oslo_config_project = gnocchi
[app:healthcheck]
use = egg:oslo.middleware#healthcheck
oslo_config_project = gnocchi

[pipeline:main]
pipeline = gnocchi_auth keystone_authtoken gnocchi.........


NoAuth Headers

Currently, noauth mode needs the X-User-Id and X-Project-Id headers on the request. I also noticed that NoAuthHelper inherits from KeystoneAuthHelper.

Does this mean that X-User-Id and X-Project-Id actually represent an OpenStack user and project, respectively? If that is the case, I am a bit confused as to why, since noauth mode shouldn't be related to OpenStack at all, if I am not mistaken.
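For illustration, a tiny sketch of the two headers as the request carries them — `extract_identity` is a hypothetical helper, not Gnocchi's NoAuthHelper, and whether Gnocchi treats the values as real Keystone IDs is exactly the question:

```python
def extract_identity(headers):
    # The values arrive as opaque strings; nothing in this sketch
    # validates them against Keystone.
    user = headers.get("X-User-Id")
    project = headers.get("X-Project-Id")
    if user is None or project is None:
        raise ValueError("X-User-Id and X-Project-Id headers are required")
    return {"user": user, "project": project}
```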
