fabric8-analytics / fabric8-analytics-common
fabric8-analytics core common development
License: Apache License 2.0
Given the stack (a list of components/packages) -
This is caused by a different status message returned by the check scripts. How to fix it: test for "special" status messages saying that all N source files are OK.
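A minimal sketch of such a test, assuming the "special" status messages follow a pattern like "N source files are ok" (the exact wording produced by the check scripts is an assumption here):

import re

# Assumed message format; adjust the pattern to the real check-script output.
STATUS_OK_PATTERN = re.compile(r"^(\d+) source files are ok$", re.IGNORECASE)

def is_all_files_ok(message):
    """Return True if the status message says that all N source files are OK."""
    return STATUS_OK_PATTERN.match(message.strip()) is not None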
We have a test for stack analysis of Python's requirements.txt. It would be great if we could add a similar test for pom.xml; a sketch of such a test follows below. I can provide a sample pom.xml file if needed.
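A hedged sketch of the new test's submission step; the endpoint path, form field name, and response field are assumptions modeled on the existing requirements.txt test, not confirmed API details:

import requests

def submit_pom_for_stack_analysis(api_url, token, pom_path="data/pom.xml"):
    """Submit a Maven pom.xml manifest for stack analysis (assumed API shape)."""
    with open(pom_path, "rb") as manifest:
        response = requests.post(
            api_url + "/api/v1/stack-analyses",
            files={"manifest[]": ("pom.xml", manifest)},
            headers={"Authorization": "Bearer " + token})
    response.raise_for_status()
    return response.json()["id"]  # analysis id to poll for results (assumed)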
Given the job with ID=TEST2 already exists:
> POST /api/v1/jobs/flow-scheduling?job_id=TEST2&state=paused HTTP/1.1
> Host: stage_host
> User-Agent: curl/7.51.0
> Content-Type: application/json
> Accept: application/json
> auth-token: _______
> Content-Length: 439
>
* upload completely sent off: 439 out of 439 bytes
< HTTP/1.1 401 UNAUTHORIZED
< Content-Type: application/json
< Content-Length: 66
< Set-Cookie: _____; path=/; HttpOnly
<
{
"error": "Job with the given job id 'TEST2' already exists"
}
The error message is OK, but the HTTP code is misleading: 401 UNAUTHORIZED signals an authentication failure, whereas a duplicate job ID is better reported as 409 CONFLICT.
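A minimal Flask-style sketch of the suggested fix; the real jobs service is surely structured differently, this only illustrates returning 409 instead of 401 for a duplicate job id:

from flask import Flask, jsonify, request

app = Flask(__name__)
existing_jobs = {"TEST2"}  # illustration: job ids already registered

@app.route("/api/v1/jobs/flow-scheduling", methods=["POST"])
def schedule_job():
    job_id = request.args.get("job_id")
    if job_id in existing_jobs:
        # 409 Conflict: the request clashes with existing server state
        return jsonify(error="Job with the given job id '%s' already exists"
                       % job_id), 409
    existing_jobs.add(job_id)
    return jsonify(job_id=job_id), 201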
The following scenario, Check the Gemini API endpoint 'stacks-report/report', failed recently (see https://ci.centos.org/job/devtools-e2e-fabric8-analytics/937/console):
Scenario: Check the Gemini API endpoint 'stacks-report/report' # features/gemini.feature:210
Given System is running # features/steps/common.py:29
When I access the /api/v1/stacks-report/report endpoint of Gemini service for STAGE/monthly/201902.json report # features/steps/gemini.py:80
Then I should get 200 status code # features/steps/common.py:90
Assertion Failed: assert 500 == 200
+ where 500 = <Response [500]>.status_code
+ where <Response [500]> = <behave.runner.Context object at 0x7fa079d15e10>.response
Then I should get a valid report # None
Deploying on DevCluster (OpenShift) can be done using the deploy.sh script as described in README.md.
However, this process hasn't worked for me yet.
As shown in the image, many services fail to run for me.
Updated documentation is required that describes:
@yzainee do we really need the gemini_fix branch? If not, please delete it or ask me to do so.
https://github.com/fabric8-analytics/fabric8-analytics-common/branches
Active branches
gemini_fix
Updated 3 months ago by Yusuf Zainee
URL to the repository: https://github.com/fabric8-analytics/victimsdb-lib
URL to the repository: https://github.com/fabric8-analytics/f8a-stacks-report
In the README.md, there's a link to API examples:
https://github.com/fabric8-analytics/examples
Unfortunately, the link is not correct (probably caused by repository renaming?).
Start the services using Docker Compose as below:
$ ./docker-compose.sh up
Failure output:
data-model-importer_1 | [2017-05-29 06:59:29 +0000] [6] [INFO] Starting gunicorn 19.7.1
data-model-importer_1 | [2017-05-29 06:59:29 +0000] [6] [INFO] Listening at: http://0.0.0.0:9192 (6)
data-model-importer_1 | [2017-05-29 06:59:29 +0000] [6] [INFO] Using worker: sync
data-model-importer_1 | [2017-05-29 06:59:29 +0000] [11] [INFO] Booting worker with pid: 11
data-model-importer_1 | [2017-05-29 06:59:33 +0000] [11] [ERROR] Exception in worker process
data-model-importer_1 | Traceback (most recent call last):
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker
data-model-importer_1 | worker.init_process()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
data-model-importer_1 | self.load_wsgi()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi
data-model-importer_1 | self.wsgi = self.app.wsgi()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
data-model-importer_1 | self.callable = self.load()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
data-model-importer_1 | return self.load_wsgiapp()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
data-model-importer_1 | return util.import_app(self.app_uri)
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/util.py", line 352, in import_app
data-model-importer_1 | __import__(module)
data-model-importer_1 | File "/src/rest_api.py", line 23, in <module>
data-model-importer_1 | if not BayesianGraph.is_index_created():
data-model-importer_1 | File "/src/graph_manager.py", line 76, in is_index_created
data-model-importer_1 | status, json_result = cls.execute(str_gremlin_dsl)
data-model-importer_1 | File "/src/graph_manager.py", line 48, in execute
data-model-importer_1 | data=json.dumps(payload))
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/requests/api.py", line 112, in post
data-model-importer_1 | return request('post', url, data=data, json=json, **kwargs)
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/requests/api.py", line 58, in request
data-model-importer_1 | return session.request(method=method, url=url, **kwargs)
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 518, in request
data-model-importer_1 | resp = self.send(prep, **send_kwargs)
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 639, in send
data-model-importer_1 | r = adapter.send(request, **kwargs)
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 502, in send
data-model-importer_1 | raise ConnectionError(e, request=request)
data-model-importer_1 | ConnectionError: HTTPConnectionPool(host='bayesian-gremlin-http', port=8182): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x26b7510>: Failed to establish a new connection: [Errno 111] Connection refused',))
The issue is that data-model-importer depends on gremlin-http being up. By the time gremlin-http starts, the importer has already failed trying to connect.
We could use some sort of delay mechanism that postpones starting data-model-importer until gremlin-http is up; a sketch follows below.
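One possible delay mechanism: a small Python helper that polls the gremlin-http TCP port before the importer process starts (host and port are taken from the traceback above; where exactly to hook this in is left open):

import socket
import time

def wait_for_service(host, port, timeout=120, interval=2):
    """Block until a TCP connection to host:port succeeds or timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=interval)
            sock.close()
            return
        except socket.error:
            time.sleep(interval)
    raise RuntimeError("%s:%s did not come up within %ss" % (host, port, timeout))

# e.g. run in the importer's entrypoint before gunicorn starts:
# wait_for_service("bayesian-gremlin-http", 8182)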
Related thread and SO post:
Current stack analysis is driven by pre-cooked application stacks.
The proposal indicated below aims to understand the intent of an OSIO user's application stack and derive recommendations based on that intent. Along with this, the proposal covers 'additional information' for the user's application stack and all the generated recommendations. The goal of providing this additional information is to convince the OSIO user to act upon the generated recommendations.
This proposal covers showing usage-based outliers and license analysis of the user's application stack.
https://docs.google.com/document/d/1VM5JLrM1DCd-hKdB2RAXrarT4sZhtliXV1wRgYLi7Tg/
URL to the repository: https://github.com/fabric8-analytics/fabric8-analytics-rudra
Both links under the API section of the README are broken.
https://github.com/fabric8-analytics/fabric8-analytics-common/blob/master/README.md#api
The newest VSCode stable version is 1.32.0. Tests need to be updated to support this version.
There is a task to group all Camel/Fuse quickstarts.
For now, it is just a Jira issue (https://issues.jboss.org/browse/ENTESB-9131), but we can find several samples.
I copy-paste them here for convenience:
https://github.com/jboss-fuse/quickstarts
https://access.redhat.com/webassets/avalon/d/Red_Hat_Fuse-7.0-Tooling_User_Guide-en-US/images/e40c9a1ed034a40a1d1ec5dea1fe8e69/nfpTemplateExamples.png
https://github.com/jboss-fuse/fuse-karaf/tree/master/quickstarts
https://github.com/wildfly-extras/wildfly-camel-examples
https://github.com/fabric8-quickstarts/
https://github.com/jboss-fuse/fuse-springboot-circuit-breaker-booster
https://github.com/jboss-fuse/fuse-rest-http-booster
https://github.com/jboss-fuse/fuse-health-check-booster
https://github.com/jboss-fuse/fuse-configmap-booster
https://github.com/jboss-fuse/fuse-crud-booster
https://github.com/jboss-fuse/fuse-rest-http-secured-booster
https://github.com/fusesource/sap-quickstarts
https://github.com/apache/camel/tree/master/examples
https://github.com/RedHatWorkshops/dayinthelife-integration/tree/master/projects
https://github.com/jboss-fuse/spring-boot-camel-xa
Currently just one test step is documented. It should be improved a bit :)
Entries to check in the bayesian-core-package-data bucket (a check sketch follows after the list):
{ecosystem}/{package}.json
{ecosystem}/{package}/{analysis}.json
for ["git_stats", "github_details", "keywords_tagging", "libraries_io"]
Currently, every repository inside the fabric8-analytics organization can set up automated builds from PRs.
Our aim with the deploy script is to get code from the developer to the dev cluster ASAP.
The idea is to set up a local git hook that will automatically trigger one of the following (a hook sketch follows below):
Use the docker build command to build images locally on a developer's machine, either via CI/CD scripts or by invoking docker build manually.
Use each developer's resources on the dev cluster and invoke docker build there.
Defining a BuildConfig and all its configuration options is well documented in the OpenShift documentation.
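A hedged sketch of the local-build variant as a pre-push hook; the image name and Dockerfile location are placeholders, and the dev-cluster variant would instead call something like oc start-build against a prepared BuildConfig:

#!/usr/bin/env python
"""Sketch of a .git/hooks/pre-push hook that builds the image locally."""
import subprocess
import sys

IMAGE = "fabric8-analytics/dev-image"  # placeholder image name

# Build from the repository root; abort the push if the build fails.
if subprocess.call(["docker", "build", "-t", IMAGE, "."]) != 0:
    sys.exit("docker build failed; push aborted")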
No idea what the cause is, so I'm filing the issue here.
From https://ci.centos.org/job/devtools-f8a-master-deploy-e2e-test/814/console
Scenario: Check the component search functionality for existing component from the npm ecosystem # features/component_search.feature:27
Given System is running # features/steps/common.py:38
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-jobs-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-gremlin-http-preview-b6ff-bayesian-preview.b6ff.rh-idev.openshiftapps.com
Given Component search service is running # features/steps/component_analysis.py:12
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
When I acquire the authorization token # features/steps/authorization.py:25
Then I should get the proper authorization token # features/steps/authorization.py:13
When I search for component wisp with authorization token # features/steps/component_analysis.py:43
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
Then I should find the analysis for the component wisp from ecosystem npm # features/steps/component_analysis.py:146
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/behave/model.py", line 1456, in run
match.run(runner.context)
File "/usr/lib/python3.4/site-packages/behave/model.py", line 1903, in run
self.func(context, *args, **kwargs)
File "/tests/features/steps/component_analysis.py", line 160, in check_component_analysis_existence
format(component=component, ecosystem=ecosystem))
Exception: Component wisp for ecosystem npm could not be found
Captured logging:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-jobs-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-gremlin-http-preview-b6ff-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
Scenario: Check the component search functionality for existing component from the pypi ecosystem # features/component_search.feature:35
Given System is running # features/steps/common.py:38
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-jobs-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-gremlin-http-preview-b6ff-bayesian-preview.b6ff.rh-idev.openshiftapps.com
Given Component search service is running # features/steps/component_analysis.py:12
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
When I acquire the authorization token # features/steps/authorization.py:25
Then I should get the proper authorization token # features/steps/authorization.py:13
When I search for component clojure_py with authorization token # features/steps/component_analysis.py:43
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
Then I should find the analysis for the component clojure_py from ecosystem pypi # features/steps/component_analysis.py:146
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/behave/model.py", line 1456, in run
match.run(runner.context)
File "/usr/lib/python3.4/site-packages/behave/model.py", line 1903, in run
self.func(context, *args, **kwargs)
File "/tests/features/steps/component_analysis.py", line 160, in check_component_analysis_existence
format(component=component, ecosystem=ecosystem))
Exception: Component clojure_py for ecosystem pypi could not be found
Captured logging:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-jobs-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-gremlin-http-preview-b6ff-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
Scenario: Check the component search functionality for existing component from the maven ecosystem # features/component_search.feature:43
Given System is running # features/steps/common.py:38
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-jobs-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-gremlin-http-preview-b6ff-bayesian-preview.b6ff.rh-idev.openshiftapps.com
Given Component search service is running # features/steps/component_analysis.py:12
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
When I acquire the authorization token # features/steps/authorization.py:25
Then I should get the proper authorization token # features/steps/authorization.py:13
When I search for component vertx with authorization token # features/steps/component_analysis.py:43
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
Then I should find the analysis for the component io.vertx:vertx-core from ecosystem maven # features/steps/component_analysis.py:146
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/behave/model.py", line 1456, in run
match.run(runner.context)
File "/usr/lib/python3.4/site-packages/behave/model.py", line 1903, in run
self.func(context, *args, **kwargs)
File "/tests/features/steps/component_analysis.py", line 160, in check_component_analysis_existence
format(component=component, ecosystem=ecosystem))
Exception: Component io.vertx:vertx-core for ecosystem maven could not be found
Captured logging:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-jobs-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): bayesian-gremlin-http-preview-b6ff-bayesian-preview.b6ff.rh-idev.openshiftapps.com
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): recommender.api.prod-preview.openshift.io
When running fabric8-analytics via sh docker-compose.sh up, I encounter the following error every time:
data-model-importer_1 | [2017-09-28 08:13:34 +0000] [11] [INFO] Worker exiting (pid: 11)
data-model-importer_1 | [2017-09-28 08:13:34 +0000] [6] [INFO] Shutting down: Master
data-model-importer_1 | [2017-09-28 08:13:34 +0000] [6] [INFO] Reason: Worker failed to boot.
data-model-importer_1 | [2017-09-28 08:13:38 +0000] [6] [INFO] Starting gunicorn 19.7.1
data-model-importer_1 | [2017-09-28 08:13:38 +0000] [6] [INFO] Listening at: http://0.0.0.0:9192 (6)
data-model-importer_1 | [2017-09-28 08:13:38 +0000] [6] [INFO] Using worker: gevent
data-model-importer_1 | [2017-09-28 08:13:38 +0000] [11] [INFO] Booting worker with pid: 11
data-model-importer_1 | [2017-09-28 08:13:39 +0000] [11] [ERROR] Exception in worker process
data-model-importer_1 | Traceback (most recent call last):
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker
data-model-importer_1 | worker.init_process()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/workers/ggevent.py", line 190, in init_process
data-model-importer_1 | super(GeventWorker, self).init_process()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
data-model-importer_1 | self.load_wsgi()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi
data-model-importer_1 | self.wsgi = self.app.wsgi()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
data-model-importer_1 | self.callable = self.load()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
data-model-importer_1 | return self.load_wsgiapp()
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
data-model-importer_1 | return util.import_app(self.app_uri)
data-model-importer_1 | File "/usr/lib/python2.7/site-packages/gunicorn/util.py", line 352, in import_app
data-model-importer_1 | __import__(module)
data-model-importer_1 | File "/usr/lib64/python2.7/site-packages/gevent/builtins.py", line 93, in __import__
data-model-importer_1 | result = _import(*args, **kwargs)
data-model-importer_1 | File "/src/rest_api.py", line 31, in <module>
data-model-importer_1 | raise RuntimeError("Failed to initialized graph schema")
data-model-importer_1 | RuntimeError: Failed to initialized graph schema
This issue is opened to track the progress of this Trello card: https://trello.com/c/zLOXKyW1/712-switch-to-pushregistrydevshiftnet
Changes made so far:
Update 2
Let's clean up the repository a bit by reviewing and possibly deleting stale branches:
https://github.com/fabric8-analytics/fabric8-analytics-common/branches/stale
The following document contains outdated information and needs to be updated according to the newly created tests:
https://github.com/fabric8-analytics/fabric8-analytics-common/blob/master/integration-tests/README.md
Since tests now run against a deployment in OpenShift where authentication is enabled, we will need API tokens in CI (for the jobs and API services).
The 'About' dialog contains the VSCode version. It would be nice if the tests checked this version in the preliminary steps (via a special feature).
New feature (Given clause). The scope document follows:
Rules and conventions are mentioned here: https://www.python.org/dev/peps/pep-0257/
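For illustration, a docstring following those conventions (the function name is just an example):

def execute(query):
    """Run the given query and return its result.

    Per PEP 257: a one-line summary in the imperative mood, then a blank
    line, then any further description.
    """
    raise NotImplementedError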
ERROR: Error while pulling image: Get
http://docker-registry.usersys.redhat.com/v1/repositories/bayesian/cvedb-s3-dump/images: dial tcp: lookup
docker-registry.usersys.redhat.com on 10.16.36.29:53: no such host
Logs: https://gist.github.com/naina-verma/4f13edb544080a78537ab5bbb30b7df7
After running integration tests against a fresh deployment on the dev cluster, the following scenarios are failing:
https://paste.fedoraproject.org/paste/E2S6~PFMR4VZ9HzKKL2zdA
A set of tests that measure the performance:
The built-in id() function is shadowed by parameters and local variables with the same name in the common module.
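A minimal illustration of the problem and the usual fix (renaming the shadowing name); the actual names in the common module may differ:

# Problematic: the parameter `id` shadows the built-in id() function,
# so calling id(...) inside the function raises TypeError.
def process(id):
    return id(id)  # TypeError: 'str' object is not callable

# Fixed: rename the parameter so the built-in stays reachable.
def process_fixed(job_id):
    return id(job_id)  # returns the object's identity as intended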
This is a tracking issue for BAF tests.
Feature: Smoke test # features/smoketest.feature:1
Scenario: Check the /schemas entry point # features/smoketest.feature:8
Given System is running # features/steps/common.py:43
When I access /api/v1/schemas/ # features/steps/common.py:266
Then I should get 200 status code # features/steps/common.py:601
Assertion Failed: assert 500 == 200
+ where 500 = <Response [500]>.status_code
+ where <Response [500]> = <behave.runner.Context object at 0x7fb27c2b0278>.response
Captured logging:
Failing scenarios:
features/smoketest.feature:8 Check the /schemas entry point
How do I use my existing OpenShift, oc cluster up, or minishift environment to do some development?
This is a tracking issue for VS Code visual tests (sub-issues to be opened for each point described above).