
ZEBRA is an open-source incubator project for Zowe. It is a data parsing framework that allows quick and easy access to z/OS performance metrics.

Home Page: https://zebra.talktothemainframe.com

License: Eclipse Public License 2.0

metrics prometheus grafana monitoring rmf smf zos zowe json api


ZEBRA - Open Source API for Enhancing RMF Metrics

License Information

This program and the accompanying materials are made available under the terms of the Eclipse Public License v2.0 which accompanies this distribution, and is available at https://www.eclipse.org/legal/epl-v20.html

SPDX-License-Identifier: EPL-2.0

Copyright Contributors to the Zowe Project.

About ZEBRA

ZEBRA (Zowe Embedded Browser for RMF and APIs) is an open source incubator project for the Open Mainframe Project©'s Zowe. The main goal of this project is to provide reusable and industry-compliant RMF data in JSON format. JSON is a modern, widely adopted standard that is attractive to developers, which opens up many applications and use cases for third-party analysis and visualization tools to consume ZEBRA's metrics.


System Requirements

Distributed Data Server (DDS)

Currently, ZEBRA requires an instance of RMF DDS (GPMSERVE) running on z/OS as the source of its data. You can find out more about setting up the DDS here.

Node.js Version 8

ZEBRA makes use of the Node.js runtime. IMPORTANT: It is imperative that you use Node.js version 8. Versions later than 8 are currently not supported. If you are getting an error about parsing or retrieving DDS data, an unsupported Node.js version is a likely cause.
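If you are unsure which runtime your environment resolves to, a quick check like the following can catch an unsupported version before you start the server. This is a minimal sketch for illustration only; it is not part of ZEBRA.

// Minimal sketch: abort early if the Node.js major version is not 8.
// Not part of ZEBRA; it only illustrates the version requirement above.
const major = Number(process.version.slice(1).split('.')[0]);
if (major !== 8) {
  console.error('ZEBRA requires Node.js 8, but found ' + process.version);
  process.exit(1);
}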

Docker (optional)

If you want to get ZEBRA set up as quickly as possible, we recommend using containerization with Docker. See below for information on how to build and run the containerized version of ZEBRA.


Built-in Third Party Support

ZEBRA comes prebuilt with integrations and frameworks for other software and tools. The following is a list of what is currently supported. All software listed is completely optional and not required for ZEBRA to run, although we strongly recommend taking advantage of these integrations.

| Software | Integration with ZEBRA |
| --- | --- |
| MongoDB | Historical database for RMF III records |
| Prometheus | Real-time data scraping for RMF III metrics |
| Grafana | Visualization of RMF III metrics |

There is some configuration required in order for these to work with ZEBRA. NOTE: If running ZEBRA using docker-compose, all third party software will be installed with no manual configuration necessary.

Configuring MongoDB

No configuration is needed beyond the standard installation; a default MongoDB installation is compatible with ZEBRA.

Reminder: ZEBRA has to be configured to work with MongoDB.

Configuring Prometheus

After installing Prometheus, locate the prometheus.yml config file. Copy and edit this file so that it looks similar to the following:

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "zebra"
  
    metrics_path: "/prommetric"
    scrape_interval: 60s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:3090"]

where localhost:3090 is the host and port where ZEBRA is running.

Reminder: ZEBRA has to be configured to work with Prometheus.
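As a quick sanity check before pointing Prometheus at ZEBRA, you can confirm that the /prommetric endpoint responds. The following is a minimal sketch in plain Node.js; it assumes ZEBRA is reachable at localhost:3090 over HTTP, so adjust the URL to your setup.

// Minimal sketch: verify that ZEBRA's Prometheus endpoint is reachable.
const http = require('http');

http.get('http://localhost:3090/prommetric', (res) => {
  console.log('/prommetric responded with HTTP status ' + res.statusCode);
  res.resume(); // discard the body; we only care that the endpoint answers
}).on('error', (err) => {
  console.error('ZEBRA is not reachable: ' + err.message);
});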

Configuring Grafana

Grafana makes use of Prometheus to visualize ZEBRA metrics. Therefore, in order to use Grafana with ZEBRA you must have Prometheus installed and configured first.

After installing and running Grafana, follow this guide on how to add a Data Source. For the source, use the Prometheus instance you set up previously.

Note: ZEBRA has to be configured to work with Grafana.


Installing ZEBRA

There are currently two ways that you can install ZEBRA: Manual or Docker. We recommend using Docker for the simplest and fastest experience. See below for more details.

Manual Installation

  1. Make sure you have the required system specifications as described here.
  2. (Optional) Install any third-party software you want to integrate with ZEBRA.
  3. Clone this repository with Git.
git clone git@github.com:zowe/zebra.git
  4. Navigate to the src directory.
cd src
  5. Install the Node.js dependencies needed for ZEBRA to run.
npm install
  6. (Optional) If developing, we recommend installing the npm package nodemon.
npm install -g nodemon
  7. (Optional) Configure ZEBRA before running for the first time.

This step is not required, since you can configure ZEBRA once it is running via the Settings page. However, if you already know how you want to configure everything, you can make a copy of Zconfig.template.json and name it Zconfig.json. Then, change your preferences and configuration following the format described here. Once the application runs, your configuration will already be applied.

  8. (Optional) Add an SSL certificate and key to the src/sslcert directory.

This step is only required if running ZEBRA over HTTPS.

  9. Run ZEBRA.
node bin/www

For a development environment, you can use:

nodemon bin/www

If successful, you should see the following message:

http server listening at port [PORT]

where PORT is the port number that ZEBRA is configured to run on.

Docker Installation

  1. Make sure you have Docker installed.
  2. Clone this repository with Git.
git clone git@github.com:zowe/zebra.git
  3. (Optional) Configure ZEBRA before running for the first time.

This step is not required, since you can configure ZEBRA once it is running via the Settings page. However, if you already know how you want to configure everything, you can make a copy of Zconfig.template.json and name it Zconfig.json. Then, change your preferences and configuration following the format described here. Once the application runs, your configuration will already be applied.

  4. (Optional) Add an SSL certificate and key to the src/sslcert directory.

This step is only required if running ZEBRA over HTTPS.

  5. Navigate to the src directory.
cd src
  6. Use docker-compose to build the container network and run ZEBRA.
docker-compose up --build

If successful, you should see the following message somewhere in the output:

http server listening at port [PORT]

NOTE: If you are getting an error about port conflicts, you can edit docker-compose.yml to change the configuration to use open ports on your machine. It should look like:

version: '3'

services:
  zebra:
    container_name: zebra
    build: .
    restart: always
    ports:
      - '[ZEBRA_PORT]:3090'
    depends_on:
      - mongo
      - prometheus
      - grafana
  mongo:
    container_name: zebra-mongo
    image: mongo:5.0.3
    ports:
      - '[MONGO_PORT]:27017'
    volumes:
      - mongo-data:/data/db
  prometheus:
    container_name: zebra-prometheus
    image: prom/prometheus:v2.30.3
    ports:
      - '[PROMETHEUS_PORT]:9090'
    volumes:
      - prometheus-data:/prometheus/data
      - ./config/prometheus:/etc/prometheus
  grafana:
    container_name: zebra-grafana
    image: grafana/grafana:8.2.2
    ports:
      - '[GRAFANA_PORT]:3000'
    depends_on:
      - prometheus
    volumes:
      - grafana-data:/var/lib/grafana
      - ./config/grafana:/etc/grafana/provisioning/datasources

volumes:
  mongo-data:
  prometheus-data:
  grafana-data:

where [ZEBRA_PORT], [MONGO_PORT], [PROMETHEUS_PORT], and [GRAFANA_PORT] are your desired ports for ZEBRA, MongoDB, Prometheus, and Grafana, respectively.


Configuring ZEBRA's Settings

You can configure ZEBRA in two ways: by editing the Zconfig.json file directly, or by using the Settings page interface once the application is running.

Field Definitions

General Settings
| Field | Definition | Required |
| --- | --- | --- |
| appurl | URL or hostname that ZEBRA is using | Always |
| appport | Port that ZEBRA is using | Always |
| ppminutesInterval | The interval (in minutes) at which RMF Postprocessor (RMF Monitor I) records are recorded into the DDS | Always |
| rmf3interval | The interval (in seconds) at which RMF Monitor III records are recorded into the DDS | Always |
| zebra_httptype | The HTTP protocol that ZEBRA is using (http or https) | Always |
| use_cert | Specifies whether to use TLS for serving the ZEBRA API (true or false) | Always |
| mongourl | URL or hostname of your instance of MongoDB | For MongoDB |
| mongoport | Port of your instance of MongoDB | For MongoDB |
| dbinterval | The interval (in seconds) at which data is recorded into MongoDB | For MongoDB |
| dbname | Name of the database to use in MongoDB | For MongoDB |
| useDbAuth | Specifies whether to use authentication for MongoDB (true or false) | No |
| dbUser | Username for MongoDB if using authentication | No |
| dbPassword | Password for MongoDB if using authentication | No |
| authSource | Source of MongoDB's authentication (default is admin) | No |
| grafanaurl | URL or hostname of your instance of Grafana | For Grafana |
| grafanaport | Port of your instance of Grafana | For Grafana |
| grafanahttptype | The HTTP protocol of your instance of Grafana | For Grafana |
| dds | Contains DDS configurations for one or more LPARs. See below for how to configure this field. | Always |

DDS Settings

Each key in the dds field represents the name of the LPAR you are configuring. For example, if your LPAR is called SLSU, your DDS config may look like:

"SLSU": {
   "ddshhttptype":"https",
   "ddsbaseurl":"salisu.com",
   "ddsbaseport":"8803",
   "ddsauth":"true",
   "ddsuser":"user",
   "ddspwd":"pass",
   "rmf3filename":"rmfm3.xml",
   "rmfppfilename":"rmfpp.xml",
   "mvsResource":",SLSU,MVS_IMAGE",
   "PCI": 3340,
   "usePrometheus":"true",
   "useMongo": "false"
}
| Field | Definition | Required |
| --- | --- | --- |
| ddshhttptype | The HTTP protocol that this DDS service is using (http or https) | Always |
| ddsbaseurl | URL or hostname of this DDS service | Always |
| ddsbaseport | Port of this DDS service | Always |
| ddsauth | Specifies whether this DDS service uses authentication (true or false) | No |
| ddsuser | Username to access this DDS (if ddsauth is true) | No |
| ddspwd | Password to access this DDS (if ddsauth is true) | No |
| rmf3filename | File name and extension used when the DDS RMF service sends RMF Monitor III records to its Web API (default is rmfm3.xml) | Always |
| rmfppfilename | File name and extension used when the DDS RMF service sends RMF Monitor I (Postprocessor) records to its Web API (default is rmfpp.xml) | Always |
| mvsResource | The default resource to query when making requests to this DDS | Always |
| PCI | The PCI value of the mainframe | Always |
| usePrometheus | Specifies whether this DDS service should make use of Prometheus data scraping (true or false) | For Prometheus |
| useMongo | Specifies whether this DDS service should store RMF III records in a MongoDB database (true or false) | For MongoDB |

Config File

The Zconfig.json file should be located in the src/config directory. In this directory, there is a Zconfig.template.json file that serves as an example of what yours could look like:

{
    "mongourl":"localhost",
    "dbinterval":"100",
    "dbname":"zebraDB",
    "appurl":"localhost",
    "appport":"3090",
    "mongoport":"27017",
    "ppminutesInterval":"30",
    "rmf3interval":"100",
    "zebra_httptype":"https",
    "useDbAuth":"true",
    "dbUser":"user",
    "dbPassword":"pass",
    "authSource":"admin",
    "useMongo":"true",
    "use_cert": "false",
    "grafanaurl":"localhost",
    "grafanaport":"9000",
    "grafanahttptype": "http",
    "dds": {
        "SLSU": {
            "ddshhttptype":"https",
            "ddsbaseurl":"salisu.com",
            "ddsbaseport":"8803",
            "ddsauth":"true",
            "ddsuser":"user",
            "ddspwd":"pass",
            "rmf3filename":"rmfm3.xml",
            "rmfppfilename":"rmfpp.xml",
            "mvsResource":",SLSU,MVS_IMAGE",
            "PCI": 3340,
            "usePrometheus":"true",
            "useMongo": "false"
        },
        "JSTN": {
            "ddshhttptype":"http",
            "ddsbaseurl":"justin.com",
            "ddsbaseport":"8803",
            "ddsauth":"true",
            "ddsuser":"user",
            "ddspwd":"pass",
            "rmf3filename":"rmfm3.xml",
            "rmfppfilename":"rmfpp.xml",
            "mvsResource":",JSTN,MVS_IMAGE",
            "PCI": 3340,
            "usePrometheus":"false",
            "useMongo": "true"
        }
    }
}

You can edit this file directly with your specifications. NOTE: Once you save the changes, a restart of ZEBRA is required.

Settings Page

As an alternative to editing the Zconfig.json file directly, you could make use of the Settings page in a browser once the application is up and running. You can find the page using the Navbar in the browser:

Config > Settings

Alternatively, you can go to the page directly at http://localhost:3090/config/settings, where localhost is your ZEBRA hostname and 3090 is your ZEBRA port.

On this page, you can input and edit the same configuration fields as described previously for both General Settings and DDS Settings.

NOTE: If you make configuration changes through this method, a restart of ZEBRA is not required.


ZEBRA API

Here, you will find documentation on ZEBRA's API and how to make the most out of each query. A full interactive Swagger doc of the API can also be found in the /apis route of the application.

RMF Postprocessor (Monitor I) Reports

RMF Postprocessor reports offer historical records. Their intervals are longer than those of RMF Monitor III, and previous records are stored for a set amount of time (usually around two weeks).

List of Supported Postprocessor Reports

These report types are confirmed to be parsable by ZEBRA. There may be some report types not listed here that still work correctly, however. If you find a working report that is not listed, please reach out and we will add it below.

Each report links to its official IBM© documentation.

| Report | Description |
| --- | --- |
| CACHE | Cache Subsystem Activity |
| CF | Coupling Facility Activity |
| CHAN | Channel Path Activity |
| CPU | CPU Activity |
| CRYPTO | Crypto Hardware Activity |
| DEVICE | Device Activity |
| EADM | Extended Asynchronous Data Mover Activity |
| HFS | Hierarchical File System Statistics |
| IOQ | I/O Queuing Activity |
| OMVS | OMVS Kernel Activity |
| PAGESP | Page Data Set Activity |
| PAGING | Paging Activity |
| SDELAY | Serialization Delay |
| VSTOR | Virtual Storage Activity |
| WLMGL | Workload Activity |
| XCF | Cross-System Coupling Facility Activity |

Additionally, when querying these reports with ZEBRA, you can append special parameters to the report as you would in the DDS. For example, instead of just WLMGL, you could use WLMGL(SCPER, RCLASS) to break the service classes down by period and include report classes.

Request Format

To get a Postprocessor report in ZEBRA format, make a GET request to the route /v1/{lpar}/rmfpp/{report}.

The route has the following parameters:

| Parameter | Description |
| --- | --- |
| lpar | Name of the reporting LPAR |
| report | RMF Postprocessor report type (see list) |

You can add additional query strings to the request for more options:

| Option | Description |
| --- | --- |
| start | Specifies the start date for the report's interval (if missing, defaults to the current date). NOTE: If start is defined, end must be as well. |
| end | Specifies the end date for the report's interval (if missing, defaults to the current date). NOTE: If end is defined, start must be as well. |

Examples

The following examples use the ZEBRA demo found at https://zebra.talktothemainframe.com:3390/.

| Request | Description |
| --- | --- |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmfpp/CPU | Gets the list of CPU Activity reports for the current date so far. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmfpp/CHAN?start=2021-11-09&end=2021-11-11 | Gets the list of Channel Path Activity reports from November 9, 2021 to November 11, 2021. NOTE: These dates are most likely outdated since Postprocessor reports only go back a limited amount of time; try changing the dates to within the last week. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmfpp/WLMGL | Gets the list of Workload Activity reports for the current date so far. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmfpp/WLMGL(SCPER,RCLASS) | Adds additional parameters to the previous request. The SCPER parameter breaks service classes down by period, and RCLASS adds report classes to the report. |
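The same endpoints can be called programmatically. The following is a minimal sketch using Node.js' built-in https module against the public demo instance; the LPAR name, report type, and dates are taken from the examples above and should be replaced with values valid for your system.

// Minimal sketch: fetch a Channel Path Activity report for a date range.
const https = require('https');

const url = 'https://zebra.talktothemainframe.com:3390/v1/RPRT/rmfpp/CHAN' +
            '?start=2021-11-09&end=2021-11-11';

https.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const reports = JSON.parse(body); // ZEBRA returns the parsed report(s) as JSON
    console.log(reports);
  });
}).on('error', (err) => console.error(err.message));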

RMF Monitor III Reports

RMF Monitor III reports offer near real-time records. Their intervals are much shorter than those of RMF Postprocessor reports. With Monitor III, you can only query the current Monitor III data, unlike Postprocessor records, which are stored for some time after they are generated. To store Monitor III records, we recommend using the MongoDB integration.

List of Supported Monitor III reports

These report types are confirmed to be parsable by ZEBRA. There may be some report types not listed here that still work correctly, however. If you find a working report that is not listed, please reach out and we will add it below.

Each report links to its official IBM© documentation.

| Report | Description |
| --- | --- |
| CHANNEL | Channel Path Activity |
| CPC | CPC Capacity |
| DELAY | Delay |
| DEV | Device Delays |
| DEVR | Device Resource Delays |
| DSND | Data Set Delays |
| EADM | Extended Asynchronous Data Mover Activity |
| ENCLAVE | Enclave |
| ENQ | Enqueue Delays |
| HSM | Hierarchical Storage Manager Delays |
| JES | Job Entry Subsystem Delays |
| OPD | OMVS Process Data |
| PROC | Processor Delays |
| PROCU | Processor Usage |
| STOR | Storage Delays |
| STORC | Common Storage |
| STORCR | Common Storage Remaining |
| SYSINFO | System Information |
| SYSSUM | Sysplex Summary |
| USAGE | Monitor III Job Usage |

Request Format

To get a Monitor III report in ZEBRA format, make a GET request to the route /v1/{lpar}/rmf3/{report}.

The route has the following parameters:

| Parameter | Description |
| --- | --- |
| lpar | Name of the reporting LPAR |
| report | RMF Monitor III report type (see list) |

You can add additional query strings to the request for more options:

| Option | Description |
| --- | --- |
| resource | Specifies the resource to query for the reports (default is the mvsResource defined in the DDS configuration) |

Examples

The following examples use the ZEBRA demo found at https://zebra.talktothemainframe.com:3390/.

| Request | Description |
| --- | --- |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf3/CPC | Gets the most recent CPC Capacity report. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf3/SYSINFO | Gets the most recent System Information report. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf3/SYSSUM?resource=,VIPLEX,SYSPLEX | Gets the most recent Sysplex Summary report from the ,VIPLEX,SYSPLEX resource. |
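The resource query string can be passed the same way from code. Here is a minimal sketch (Node.js, public demo instance) requesting the Sysplex Summary report for the ,VIPLEX,SYSPLEX resource used in the examples above; substitute a resource valid for your own sysplex.

// Minimal sketch: fetch the most recent SYSSUM report for a specific resource.
const https = require('https');

const url = 'https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf3/SYSSUM' +
            '?resource=' + encodeURIComponent(',VIPLEX,SYSPLEX');

https.get(url, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(JSON.parse(body)));
}).on('error', (err) => console.error(err.message));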

Individual RMF Metrics

ZEBRA can also individually retrieve and parse certain RMF metrics defined by the DDS.

List of Supported RMF Metrics

To see a list of what metrics are available in your system, use the /v1/{lpar}/rmf?id=LIST API route (where lpar is the reporting LPAR). You can also include a resource query option to see the metrics for different resources (default is mvsResource defined in configuration).

Request Format

To get an individual RMF metric from ZEBRA, make a GET request to the route /v1/{lpar}/rmf?id={metricId}.

The route has the following parameters:

| Parameter | Description |
| --- | --- |
| lpar | Name of the reporting LPAR |
| metricId | ID of the RMF metric (the list of available metric IDs and descriptions can be found here) |

You can add additional query strings to the request for more options:

| Option | Description |
| --- | --- |
| resource | Specifies the resource to get the metric from |

Examples

The following examples use the ZEBRA demo found at https://zebra.talktothemainframe.com:3390/.

| Request | Description |
| --- | --- |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf?id=LIST | Lists the RMF metric IDs and their descriptions in the default resource. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf?id=LIST&resource=,VIPLEX,SYSPLEX | Lists the RMF metric IDs and their descriptions in the ,VIPLEX,SYSPLEX resource. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf?id=8D0160 | Gets the most recent value for '% delay' (ID: 8D0160) from the default resource. |
| https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf?id=8D0160&resource=,VIPLEX,SYSPLEX | Gets the most recent value for '% delay' (ID: 8D0160) from the ,VIPLEX,SYSPLEX resource. |
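Individual metrics can be discovered and fetched in two steps: first list the available IDs, then request one of them. The following minimal sketch (Node.js, public demo instance) uses the '% delay' metric ID from the examples above; metric IDs vary by system, so always check the LIST output first.

// Minimal sketch: list available RMF metric IDs, then fetch one of them.
const https = require('https');

function getJSON(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => resolve(JSON.parse(body)));
    }).on('error', reject);
  });
}

const base = 'https://zebra.talktothemainframe.com:3390/v1/RPRT/rmf';

getJSON(base + '?id=LIST')
  .then((list) => {
    console.log(list);                   // available metric IDs and descriptions
    return getJSON(base + '?id=8D0160'); // '% delay' in the examples above
  })
  .then((metric) => console.log(metric))
  .catch((err) => console.error(err.message));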

Exposing RMF Data to Prometheus

ZEBRA comes with a built-in API and framework for creating real-time Prometheus metrics from RMF Monitor III data. When the application runs for the first time, a metrics.json file is created in the src directory. This is where ZEBRA stores the custom Prometheus metrics that you define. While you can edit this file directly with your metric configuration, we recommend using the API. For complete documentation of the API, check out the Swagger page on the /apis route.

Custom Metric Format

Before getting into the API calls, it is important to understand how ZEBRA formats these custom metrics. In the src directory, there is a metrics.template.json that serves as an example of what the metrics should look like:

{
    "RPRT_QCK2_PTOU": {
        "lpar": "RPRT",
        "request": {
            "report": "CPC",
            "resource": ",RPRT,MVS_IMAGE"
        },
        "identifiers": [
            {
                "key": "CPCPPNAM",
                "value": "QCK2"
            }
        ],
        "field": "CPCPPTOU",
        "desc": "Physical total utilization for the QCK2 partition."
    },
    "RPRT_TRNG_PTOU": {
        "lpar": "RPRT",
        "request": {
            "report": "CPC",
            "resource": ",RPRT,MVS_IMAGE"
        },
        "identifiers": [
            {
                "key": "CPCPPNAM",
                "value": "TRNG"
            }
        ],
        "field": "CPCPPTOU",
        "desc": "Physical total utilization for the TRNG partition."
    },
    "RPRT_VIDVLP_PTOU": {
        "lpar": "RPRT",
        "request": {
            "report": "CPC",
            "resource": ",RPRT,MVS_IMAGE"
        },
        "identifiers": [
            {
                "key": "CPCPPNAM",
                "value": "VIDVLP"
            }
        ],
        "field": "CPCPPTOU",
        "desc": "Physical total utilization for the VIDVLP partition."
    },
    "RPRT_VIRPT_PTOU": {
        "lpar": "RPRT",
        "request": {
            "report": "CPC",
            "resource": ",RPRT,MVS_IMAGE"
        },
        "identifiers": [
            {
                "key": "CPCPPNAM",
                "value": "VIRPT"
            }
        ],
        "field": "CPCPPTOU",
        "desc": "Physical total utilization for the VIRPT partition."
    }
}

Each top-level key in the JSON is the name of the Prometheus metric. You can name the metrics however you like; there is no strict naming convention.

| Field | Definition |
| --- | --- |
| lpar | The name of the reporting LPAR. |
| request | Object that contains info about the request needed to get the data. The requests are RMF Monitor III, so you must specify a report type to call. Optionally, you can provide a resource target; if no resource is provided, the default mvsResource specified in configuration will be used. |
| identifiers | Array of key-value pairs used as conditions to select the data of the appropriate entity. For example, to get the total physical utilization of only the partition named QCK2, set key to CPCPPNAM (partition name) and value to QCK2. Since identifiers is an array, you can add as many key-value pairs as needed for multiple conditions. Can be left empty ([]) if not needed. |
| field | The field whose value is used as the Prometheus metric. |
| desc | Optionally, you can provide a description of what the metric is tracking, for readability. |

Creating a Prometheus Metric

To initialize a new custom Prometheus metric, make a POST request to /v1/metrics/{metricName}, where metricName is the name of your new custom metric. This POST request should have a JSON body in the custom metric format described above. Here is an example:

Request:

POST https://zebra.talktothemainframe.com:3390/v1/metrics/RPRT_QCK2_PTOU

Request Body:

{
   "lpar": "RPRT",
   "request": {
       "report": "CPC",
       "resource": ",RPRT,MVS_IMAGE"
   },
   "identifiers": [
       {
           "key": "CPCPPNAM",
           "value": "QCK2"
       }
   ],
   "field": "CPCPPTOU",
   "desc": "Physical total utilization for the QCK2 partition."
}

Response:

{
    "msg": "Metrics were successfully created.",
    "err": false
}
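The same creation request can be issued from code. Below is a minimal sketch (Node.js, built-in https module) that POSTs the metric definition shown above to the public demo instance; the host, port, metric name, and body are the example values and should be replaced with your own.

// Minimal sketch: create the custom metric RPRT_QCK2_PTOU by POSTing its definition.
const https = require('https');

const metric = {
  lpar: 'RPRT',
  request: { report: 'CPC', resource: ',RPRT,MVS_IMAGE' },
  identifiers: [{ key: 'CPCPPNAM', value: 'QCK2' }],
  field: 'CPCPPTOU',
  desc: 'Physical total utilization for the QCK2 partition.'
};

const body = JSON.stringify(metric);

const req = https.request({
  hostname: 'zebra.talktothemainframe.com',
  port: 3390,
  path: '/v1/metrics/RPRT_QCK2_PTOU',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, (res) => {
  let data = '';
  res.on('data', (chunk) => { data += chunk; });
  res.on('end', () => console.log(data)); // expected: { "msg": "...", "err": false }
});

req.on('error', (err) => console.error(err.message));
req.write(body);
req.end();
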
Retrieving a Prometheus Metric

To retrieve a custom Prometheus metric, make a GET request to /v1/metrics/{metricName}, where metricName is the name of a custom metric that already exists. If you do not provide a metric name, it will list all current Prometheus metrics. Here is an example:

Request:

GET https://zebra.talktothemainframe.com:3390/v1/metrics/RPRT_QCK2_PTOU

Response:

{
    "data": {
        "lpar": "RPRT",
        "request": {
            "report": "CPC",
            "resource": ",RPRT,MVS_IMAGE"
        },
        "identifiers": [
            {
                "key": "CPCPPNAM",
                "value": "QCK2"
            }
        ],
        "field": "CPCPPTOU",
        "desc": "Physical total utilization for the QCK2 partition."
    },
    "msg": "Metric 'RPRT_QCK2_PTOU' successfully retrieved",
    "err": false
}
Updating a Prometheus Metric

To update a custom Prometheus metric, make a PUT request to /v1/metrics/{metricName}, where metricName is the name of a custom metric that already exists. Here is an example:

Request:

PUT https://zebra.talktothemainframe.com:3390/v1/metrics/RPRT_QCK2_PTOU

Request Body:

{
   "lpar": "RPRT",
   "request": {
       "report": "CPC",
       "resource": ",RPRT,MVS_IMAGE"
   },
   "identifiers": [
       {
           "key": "CPCPPNAM",
           "value": "QCK2"
       }
   ],
   "field": "CPCPLTOU",
   "desc": "Logical total utilization for the VIRPT partition."
}

Response:

{
    "msg": "Metric was successfully updated.",
    "err": false
}
Deleting a Prometheus Metric

To delete a custom Prometheus metric, make a DELETE request to /v1/metrics/{metricName}, where metricName is the name of a custom metric that already exists. Here is an example:

Request:

DELETE https://zebra.talktothemainframe.com:3390/v1/metrics/RPRT_QCK2_PTOU

Response:

{
    "msg": "Metric 'RPRT_QCK2_PTOU' was successfully deleted.",
    "err": false
}

Support

For any questions or help with any aspect of ZEBRA, you can contact the development team directly or open an issue on GitHub. For Slack users, there is a channel for ZEBRA in the Open Mainframe Project©'s workspace that you can use to get in touch with the team and community! We greatly appreciate any feedback or suggestions!

| Name | Role | Contact |
| --- | --- | --- |
| Alex Kim | Project Lead | [email protected] |
| Salisu Ali | Developer | [email protected] |
| Justin Santer | Developer | [email protected] |


zebra's Issues

Not getting any data back from DDS

Hi,
First of all - great work, guys! I have installed ZEBRA under zCX (z/OS 2.4) and updated the ZEBRA config file to point to our DDS. When I try either "Retrieve RMF III Report In JSON" or "Retrieve RMF III Workload Report In JSON", I get no data back, and the following message appears in my browser:

[screenshot attached]

When I go to the z/OS system where the RMF DDS server is running and do a NETSTAT, I can see a connection from the zCX container where ZEBRA is running.

Is there a ZEBRA log to see what the error may be?

Would love to get this working as it has great potential and any assistance would be appreciated

Roger

Zebra - resume status

Alex, Salis, Justin, this is not an issue, but I don't find another way to communicate (due to my ignorance about GitHub) a summary of what we have done. I will close it immediately once you acknowledge it. Thanks

github_zebra.pdf

Update package.json to use static versions

Build pipeline is failing and throwing more errors:

Run zowe-actions/shared-actions/validate-package-json@main
Run /home/runner/work/_actions/zowe-actions/shared-actions/main/validate-package-json/validate-package-json.sh ""
Safe release date is before 2022-02-03T16:19:49.000Z

=======================================================================
>>>>>>>> ./src/package.json
----------------------------------------------------
Validate static dependency versions:
- dependencies
  * axios@^0.20.0
Error: axios@^0.20.0 is not imported with static version.
Error: Process completed with exit code 1.

Looks like package.json needs to be updated so that the dependency versions are static and not relative. I think this means that there shouldn't be symbols like '^' (as seen in the axios dependency above) or '~' in the version numbers. Would it be sufficient to just remove all of these symbols?

Design class documentation for new TypeScript formats for RMF reports

Goal

Since we are moving to TypeScript, we need to strictly define types for the data that is returned from the RMF reports parsed from the DDS. Below lists which reports need to have class diagrams documented.

RMF Postprocessor Reports

  • CACHE
  • CF
  • CHAN
  • CPU
  • CRYPTO
  • DEVICE
  • EADM
  • HFS
  • IOQ
  • OMVS
  • PAGESP
  • PAGING
  • SDELAY
  • VSTOR
  • WLMGL
  • XCF

RMF Monitor III Reports

  • CHANNEL
  • CPC
  • DELAY
  • DEV
  • DEVR
  • DSND
  • EADM
  • ENCLAVE
  • ENQ
  • HSM
  • JES
  • OPD
  • PROC
  • PROCU
  • STOR
  • STORC
  • STORCR
  • SYSINFO
  • USAGE

[BUG] Main Router seems to be redirecting incorrectly.

Describe the bug
When I try to access some of the pages described on page 28 of the PDF document, unexpected behavior occurs for the URLs listed in mainRouter.

/prommetric does not list any metrics despite them being set up.
/settings and /addsettings give a 500 error.

To Reproduce
Steps to reproduce the behavior:

  1. Go to /[prommetric, settings, addsettings]

Expected behavior
/prommetric should list Prometheus metrics
/settings and /addsettings should direct to the pages for settings and to add settings.

Screenshots
Screen Shot 2022-06-17 at 12 40 16 PM
Screen Shot 2022-06-17 at 12 41 21 PM
Screen Shot 2022-06-17 at 12 41 54 PM

Desktop (please complete the following information):

  • OS: macOS 12.4
  • Browser: Safari
  • Version 15.5

Additional context
I work with Eric who opened up #101 and we are trying to get the Docker version of Zebra working by itself as we already have an existing Prometheus and Grafana instance running. I had taken out Mongo, Prometheus and Grafana from the docker-compose.yml so those systems are currently not running. I have attached (with PII removed) the Zconfig.json, the modded docker-compose.yml and our metrics.json.

Thank you for this awesome tool!

docker-compose.txt
metrics.txt
Zconfig.txt

[SUGGESTION] Support SMF instrumentation data stored in databases

Is your feature request related to a problem? Please describe.
Companies struggle to find a way to quickly get insights to speed up initiatives aligned with the business. Zebra is capable of ingesting SMF instrumentation data for RMF, storing it in a TSDB, and presenting it in Grafana. I would like zebra to collect SMF data stored in databases via a SQL exporter, for SMF data types other than 70 to 79 (RMF Monitor I and III).

Describe the solution you'd like

  1. Configure DB2 Connect to reach a DB2 Database with SMF data stored
  2. Setup Prometheus Java JMX exporter agent to scrape from the DB2 Connect Server
  3. Configure Grafana presentation Layer

Describe alternatives you've considered
Might use different Databases and methods to infuse and store the data like ELK

Additional context
[diagram attached]

Support for RMF PP CPU Activity Report

Currently, when Zebra takes RMF Postprocessor data for CPU reports, it only takes the 'Partition Data Report' section and parses it to JSON. In order to provide complete system data, it should also include the 'CPU Activity' report section along with additional header information (RMF/SMF data version).

Prometheus data scraping using 'http' instead of 'https'

Prometheus is calling the '/prommetric' route correctly, however the corresponding ZEBRA request to get the appropriate data always uses 'http' even if 'https' is the configured protocol. This leads to no data being fed to Prometheus.

_TOU and _EFU metrics do not appear, and _MSU and _CHANNEL have no values

Hi, now I am running with zebra-salisu_dev and it is scraping OK, and I can build dashboards in Grafana, but in the process I found the following:

  1. scraping process in zebra (OK)

GET /prommetric 200 6.218 ms - -
GET /prommetric 200 6.218 ms - -
GET /v1/PC1B/rmf3/CPC 200 7649.474 ms - 1280
GET /v1/PC1B/rmf3/CPC 200 7649.474 ms - 1280
GET /v1/PC1B/rmf3/SYSINFO 200 8738.674 ms - 23783
GET /v1/PC1B/rmf3/SYSINFO 200 8738.674 ms - 23783
GET /v1/PC1B/rmf3/USAGE 200 8760.418 ms - 34383
GET /v1/PC1B/rmf3/USAGE 200 8760.418 ms - 34383
GET /v1/PC1B/rmf3/CHANNEL 200 9898.490 ms - 99812
GET /v1/PC1B/rmf3/CHANNEL 200 9898.490 ms - 99812

  2. When I search for the metrics in Grafana (or at the Prometheus URL):

| TYPE | Status |
| --- | --- |
| TOU Value | the metric doesn't appear |
| EFU Value | the metric doesn't appear |
| MSU Value | the metric appears with no value |
| VC Value | OK, the metric appears with a value |
| CHANNEL Value | the metric appears with no value |
| JOB Value | OK, the metric appears with a value |

Any idea what the problem could be?

Thanks.

ZEBRA user admin - initial user/password

I downloaded the new dev version, but when I try the user Admin/Admin (which I assume is the default user/password) it fails, and I can't find how to solve this in the documentation. Thanks

[SUGGESTION] Determine SSO for DDS

To leverage SSO via the Zowe APIML, we need to understand how SSO can work for DDS. Currently, Zebra sends a username and password to authenticate when retrieving metrics. We need to understand what other options there are, so we can know whether APIML can be used for SSO. For example, can passtickets be sent?

No prometheus metrics

Hi, I am working with zebra_dev and it is running OK (no more parsing errors), so:

install zebra_dev, OK
install Prometheus, OK; and connecting to Zebra OK
install and running Grafana, OK; and sourcing from Prometheus OK
DDS running OK and access to RMF Data Portal, OK
Testing from zebra to DDS OK

but I don't find any metrics in Prometheus, even though I can see that it is scraping.

From the zebra log I can see that it is being scraped:

GET /prommetric 200 0.494 ms - -
GET /prommetric 200 0.692 ms - -
GET /prommetric 200 1.309 ms - -
GET /prommetric 200 0.633 ms - -
GET /prommetric 200 0.678 ms - -
GET /prommetric 200 2.909 ms - -

From Prometheus, the target looks UP (under Targets) and running. I have read the documentation n times but don't see where the problem is, because there are no error messages.

Any idea what the problem is? It seems to be between Prometheus and zebra.

THANKS!

[QUESTION]

I’ve just installed ZOWE ZEBRA using Docker and I’m trying to login to the Config/Metrics settings at localhost:3090. A Login page is displayed but I don't see login credentials in the documentation. Is there a default Zebra Username and Password?

User account for Grafana (zebra.talktothemainframe.com:3000/login)

Hi,

I am planning to use zebra in our production system to gather and chart our RMF data in Grafana. I know it is an "incubation project" but it seems very mature. I am very impressed. I am very interested in CPU statistics, and it would be great if I could access the demo site to get an idea.

Thanks a lot!!!

Access MongoDb as TSDB

Hi, zebra is working fine and I am using it through Grafana in 3 different ways:

  1. prometheus (classic way): it works ok
  2. zebra: accessing the JSON information from zebra via the Infinity datasource; it works fine and allows building a summary of the instantaneous (most relevant) RMF III metrics in only one dashboard; it works OK
  3. mongoDB: here is a gold mine of metrics of enormous value, but it is difficult to exploit (because of my lack of knowledge of MongoDB). I have installed a MongoDB datasource (https://github.com/JamesOsgood/mongodb-grafana, not the enterprise one) and can access the info, but it is difficult for me to convert it to a TSDB format (time, metric, value) because there are vectors and arrays in a field where I want to recover only one cell.
    As I can see from the zebra panel (Browse RMF data from MongoDB) you are processing this info; is there a way, in the MongoDB query language, to convert this info to TSDB format, for CPC or USAGE metrics for example?

Thanks.

PS: which version of zebra do you recommend using? I am using salisu_dev.

[BUG] No prometheus recording

Describe the bug
The latest main version, in two different installations (Windows without containers and Linux with containers), is not recording in Prometheus (but it is recording successfully in MongoDB). Below is the Zconfig.json file used (IP numbers are changed intentionally):

{
"dds" : {
"PC1B": {
"ddshhttptype":"http",
"ddsbaseurl":"99.999.999.99",
"ddsbaseport":"8803",
"ddsauth":"false",
"ddsuser":"ANY",
"ddspwd":"ANY",
"rmf3filename":"rmfm3.xml",
"rmfppfilename":"rmfpp.xml",
"mvsResource":",PC1B,MVS_IMAGE",
"PCI": 3543,
"usePrometheus":"true",
"useMongo":"true"
},
"DD1B" : {
"ddshhttptype":"http",
"ddsbaseurl":"99.999.999.99",
"ddsbaseport":"8803",
"ddsauth":"false",
"ddsuser":"ANY",
"ddspwd":"ANY",
"rmf3filename":"rmfm3.xml",
"rmfppfilename":"rmfpp.xml",
"mvsResource":",DD1B,MVS_IMAGE",
"PCI": 3543,
"usePrometheus":"true",
"useMongo":"true"
}
},
"ppminutesInterval":"30",
"rmf3interval":"100",
"use_cert": "false",
"zebra_httptype":"http",
"appurl":"localhost",
"appport":"3090",
"mongourl":"localhost",
"dbinterval":"100",
"dbname":"Zebrav1111",
"mongoport":"27017",
"useDbAuth":"false",
"dbUser":"myUserAdmin",
"dbPassword":"salisu",
"authSource":"admin",
"grafanaurl":"localhost",
"grafanaport":"3000",
"grafanahttptype": "http",
"apiml_http_type" : "https",
"apiml_IP" : "localhost",
"apiml_port" : "10010",
"apiml_auth_type" : "bypass",
"apiml_username" : "username",
"apiml_password" : "password"
}

From the zebra log it seems like Prometheus is scraping well:

GET /v1/DD1B/rmf3/SYSSUM?resource=%22,,SYSPLEX%22 200 5170.077 ms - 8575
Workload Updated Successflly
GET /prommetric 200 1.916 ms - -
GET /v1/DD1B/rmf3/CPC 200 5192.697 ms - 7927
GET /v1/DD1B/rmf3/PROC 200 5198.601 ms - 8347
PROC Updated Successflly
CPC Updated Successflly
GET /v1/DD1B/rmf3/SYSINFO 200 5520.221 ms - 16353
GET /v1/DD1B/rmf3/USAGE 200 6156.523 ms - 54096
USAGE Updated Successflly
GET /v1/DD1B/rmf3/SYSSUM?resource=%22,,SYSPLEX%22 200 5183.854 ms - 8572
Workload Updated Successflly
GET /v1/PC1B/rmf3/CPC 200 49482.957 ms - 7930
CPC Updated Successflly
GET /v1/PC1B/rmf3/PROC 200 50137.166 ms - 26708
GET /v1/PC1B/rmf3/SYSINFO 200 50133.833 ms - 19518
PROC Updated Successflly
GET /v1/PC1B/rmf3/USAGE 200 51419.914 ms - 80300
USAGE Updated Successflly
GET /v1/DD1B/rmf3/SYSSUM?resource=%22,,SYSPLEX%22 200 5517.269 ms - 8572
Workload Updated Successflly
GET /prommetric 200 0.796 ms - -
GET /v1/DD1B/rmf3/PROC 200 5190.204 ms - 7953
GET /v1/DD1B/rmf3/CPC 200 5208.616 ms - 7927
PROC Updated Successflly
CPC Updated Successflly
GET /v1/DD1B/rmf3/SYSINFO 200 5509.705 ms - 16354
GET /v1/DD1B/rmf3/USAGE 200 6463.056 ms - 54094
USAGE Updated Successflly
GET /v1/DD1B/rmf3/SYSSUM?resource=%22,,SYSPLEX%22 200 5196.712 ms - 8572
Workload Updated Successflly
GET /v1/PC1B/rmf3/CPC 200 22203.334 ms - 7930
CPC Updated Successflly
GET /v1/PC1B/rmf3/SYSINFO 200 22494.462 ms - 19530
GET /v1/PC1B/rmf3/PROC 200 22839.636 ms - 25454
PROC Updated Successflly
GET /v1/PC1B/rmf3/USAGE 200 23765.883 ms - 81246
USAGE Updated Successflly
GET /v1/DD1B/rmf3/SYSSUM?resource=%22,,SYSPLEX%22 200 5212.415 ms - 8572
Workload Updated Successflly
GET /prommetric 200 0.713 ms - -
GET /v1/DD1B/rmf3/PROC 200 5224.576 ms - 6750
GET /v1/DD1B/rmf3/CPC 200 5239.796 ms - 7927
PROC Updated Successflly
CPC Updated Successflly
GET /v1/DD1B/rmf3/SYSINFO 200 5521.973 ms - 16320
GET /v1/DD1B/rmf3/USAGE 200 6518.183 ms - 54094
USAGE Updated Successflly
GET /v1/DD1B/rmf3/SYSSUM?resource=%22,,SYSPLEX%22 200 5176.130 ms - 8570
Workload Updated Successflly
GET /v1/PC1B/rmf3/CPC 200 11166.080 ms - 7930
CPC Updated Successflly
GET /v1/PC1B/rmf3/SYSINFO 200 11460.175 ms - 19530
GET /v1/PC1B/rmf3/PROC 200 11793.293 ms - 27067
PROC Updated Successflly
GET /v1/PC1B/rmf3/USAGE 200 13057.265 ms - 85021
USAGE Updated Successflly
GET /v1/DD1B/rmf3/SYSSUM?resource=%22,,SYSPLEX%22 200 5155.525 ms - 8570
Workload Updated Successflly

From the Prometheus console, targets and configuration look OK and running, but no metrics are inserted.

To Reproduce
Steps to reproduce the behavior:

Start ZEBRA and Prometheus to run together

Expected behavior
RMF metrics recorded in TSDB prometheus


Desktop (please complete the following information):

  • OS: windows 10
  • Browser: chrome
  • Version: 98.0.4758.102 (Official Build) (64-bit)



[BUG] Pipeline 'forever' command breaks relative file paths.

Describe the bug
Since the forever command in the build pipeline is called from the root directory instead of the src directory, there are errors when trying to open files that use relative paths. This prevents the application from running and takes down the demo server.

To Reproduce
Steps to reproduce the behavior:

  1. Run the build pipeline when pushing to main branch.

Expected behavior
After deployment, the demo server should be updated and running.

Parsing Error

Hi, I am trying to run Zebra standalone in a LAB environment, I followed these steps

  1. install Zebra, OK
  2. install Prometheus, OK; and connecting to Zebra OK
  3. install and running Grafana, OK; and sourcing from Prometheus OK
  4. DDS running OK and access to RMF Data Portal, OK

But when I try to test through All APIs menu it responds

Parsing Error
Parser unable to parse the data received. Please check if your DDS Service is running or set up correctly (checked, it's OK).

Another,

When I go to the Manage File it responds

Unable to scan uploads directory: Error: ENOENT: no such file or directory, scandir 'C:\Users\FernandoZangari\Documents\GitHub\zebra\src\uploads'

Work environment (Zebra, Grafana, Prometheus) : Windows

I am very excited about this extraordinary solution, please some help is needed. THANKS!

[BUG] Undefined ZEBRA Login?

Describe the bug
Maybe this is a documentation bug, maybe I'm an idiot.

Used the docker version of ZEBRA, and edited the docker-compose.yml file because I have some other services that interfere with the ports. I then copied Zconfig.template.json to Zconfig.json and edited the ports, zOS RMF IP's/logins, etc to match what I know.

docker-compose-build runs pretty clean (don't see any port conflicts...etc.). I do see:
UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 657)

I am on Node.js 8.10.0.

When I point to the ZEBRA browser, I can see my sysplex names and such. However, if I go to Settings -> Config in the GUI, I get hit with a Login screen asking for a username and password. I can't see in the documentation what these should be, or where to set them. I tried:

Server username/password
DD2 username/password
MongoDb username/password (as set in the Zconfig.json file)

None of these work, and get a Login Fail error.

I even tried renaming the Zconfig file so that ZEBRA started without any Zconfig.json file (I checked this because no LPAR was defined in the GUI), and still get the login screen. The documentation makes it sound like this shouldn't be the case.

Expected behavior
I thought that this should not be login-protected, or the documentation should explain how to set the ZEBRA login. I actually don't see anywhere in the documentation where a login is required for ZEBRA itself, just for Mongo/Prometheus/Grafana.

Screenshots
See below for the login failure.

Desktop (please complete the following information):
Zebra running on Ubuntu 20.04 LTS server edition in a Windows Server 2019 Hyper-V Lenovo server.

Desktop is Windows 10 running a Mozilla Firefox browser.
zebra login


[SUGGESTION] Create unit tests to be ran during build pipeline

Is your feature request related to a problem? Please describe.
Now that ZEBRA has a functioning build pipeline, we should add unit tests to ensure that builds are robust and complete before being added to production.

Describe the solution you'd like
Using popular testing library Jest, create *.test.js files for the ZEBRA's controller functions.

Describe alternatives you've considered
Other testing libraries include Mocha and Cypress.

Additional context
None

[BUG] latest update for actions specifying node version for deployment didn't work

Describe the bug
Github Actions completed successfully but the 'nvm use ....' command didn't work as deployed server still has complaint on node version.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'https://zebra.talktothemainframe.com:3390/ '
  2. Click on LPAR and generate any API request on web
  3. See error

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
Error
NODE VERSION INCOMPATIBILITY

This Error Occurs as a Result of DDS TLS Incompatibility with your NodeJS Version

Install NodeJS Version 8.11.2 to Solve This Problem.

No prometheus metrics on zebra_dev

Hi, I am working with zebra_dev and it is running OK (no more parsing errors), so:

install zebra_dev, OK
install Prometheus, OK; and connecting to Zebra OK
install and running Grafana, OK; and sourcing from Prometheus OK
DDS running OK and access to RMF Data Portal, OK
Testing from zebra to DDS OK

but I don't find any metrics in Prometheus, even though I can see that it is scraping.

Tasks done:

Check prometheus.yml and it is OK (api_version was changed to v1)
Check zconfig.json and it is OK
Customize metrics.json to adjust to our environment

Even so, it doesn't work.

Any idea what the problem is? It seems to be between Prometheus and zebra.

THANKS!

Using /src/config/Zconfig.json and /src/metrics.json external to initial docker image

I built a base zebra Docker image, built with the Dockerfile from the site, which I intend to use with any service instance that I want to create.

I am using the following Docker compose file to override the Zconfig.json without success

version: '3'

services:
  zebra:
    container_name: zebra_teco
    image: src_zebra:latest
    network_mode: "host" 
    build: .
#    restart: always
    ports:
      - '3090:3090'
    volumes:
      - ./zebra/src/config/Zconfig.json:/zebra/src/config/Zconfig.json

Any idea what the problem is?

Thanks.

[BUG] Two identifiers in metrics.json don't seem to work as expected

Describe the bug
I have tested metrics.json with two identifiers, but it does not seem to work.

To Reproduce
I have tested metrics.json with two identifiers, but it does not seem to work. In the case of CHANNEL, I use:

a) First identifier ==> Type Channel (doesn't work)
.......
"PC1B_CHACPIVC_FC": {
"lpar": "PC1B",
"request": {
"report": "CHANNEL",
"resource": ",PC1B,MVS_IMAGE"
},
"identifiers": [
{
"key": "CHACPTVC",
"value": "FC"
},
{
"key": "CHACPIVC",
"value": "ALL"
}
],
"field": "CHACPUVC",
"desc": "Channel - Part util %"
},
"PC1B_CHACPIVC_OSD": {
"lpar": "PC1B",
"request": {
"report": "CHANNEL",
"resource": ",PC1B,MVS_IMAGE"
},
"identifiers": [
{
"key": "CHACPTVC",
"value": "OSD"
},
{
"key": "CHACPIVC",
"value": "ALL"
}
],
.....

b) First Channel ID (repeat the metric without selecting the channel type)

.....
"PC1B_CHACPIVC_FC": {
	"lpar": "PC1B",
	"request": {
		"report": "CHANNEL",
		"resource": ",PC1B,MVS_IMAGE"
	},
	"identifiers": [
		{
			"key": "CHACPIVC",
			"value": "ALL"
		},
                {
			"key": "CHACPTVC",
			"value": "FC"
		}
	],
	"field": "CHACPUVC",
	"desc": "Channel - Part util %"
},
"PC1B_CHACPIVC_OSD": {
	"lpar": "PC1B",
	"request": {
		"report": "CHANNEL",
		"resource": ",PC1B,MVS_IMAGE"
	},
	"identifiers": [
		{
			"key": "CHACPIVC",
			"value": "ALL"
		},
                {
			"key": "CHACPTVC",
			"value": "OSD"
		}
	],
.....

Do you have an example of this type of configuration? There is none on the Zebra site.

Expected behavior
A qualified naming build that facilitate dashboard build

Screenshots
N/A

Desktop (please complete the following information):

  • OS: Windows10
  • Browser [chrome]

Additional context
Do you have an example of this type of configuration? There is none on the Zebra site.

Unused Zowe dependencies creating issue in build pipeline

The build pipeline keeps failing and throwing the following error:

=======================================================================
>>>>>>>> ./src/package.json
----------------------------------------------------
Validate static dependency versions:
- dependencies
  * @zowe/cli@^6.32.1
Error: @zowe/cli@^6.32.1 is not imported with static version.

[BUG] Cannot read property 'mvsResource' of undefined at RMFIIIJSON

Describe the bug
Hey guys, I have been testing a pipeline deployment and I am seeing some recent issues with pulling RMF III reports. I am not sure what is causing this, and was hoping you guys had some info?

To Reproduce
Steps to reproduce the behavior:

  1. Spin up Zebra stack using Docker
  2. Go to the Zebra web page
  3. Choose any report
  4. The report times out
  5. Zebra logs will report with an error

Expected behavior
I should be able to view the JSON data for the supplied report, instead the webpage times out with no data. I was able to have this working about two weeks ago, but now this has stopped.

Error
I get the following error when viewing the Docker log for zebra:

GET /v1/S04/rmf3/CPC - - ms - -
(node:24) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'mvsResource' of undefined
    at RMFIIIJSON (/zebra/app_server/v1_Controllers/RMF3Controller.js:218:29)
    at module.exports.RMFIII (/zebra/app_server/v1_Controllers/RMF3Controller.js:318:11)
    at Layer.handle [as handle_request] (/zebra/node_modules/express/lib/router/layer.js:95:5)
    at next (/zebra/node_modules/express/lib/router/route.js:137:13)
    at Route.dispatch (/zebra/node_modules/express/lib/router/route.js:112:3)
    at Layer.handle [as handle_request] (/zebra/node_modules/express/lib/router/layer.js:95:5)
    at /zebra/node_modules/express/lib/router/index.js:281:22
    at param (/zebra/node_modules/express/lib/router/index.js:354:14)
    at param (/zebra/node_modules/express/lib/router/index.js:365:14)
    at param (/zebra/node_modules/express/lib/router/index.js:365:14)
    at Function.process_params (/zebra/node_modules/express/lib/router/index.js:410:3)
    at next (/zebra/node_modules/express/lib/router/index.js:275:10)
    at Function.handle (/zebra/node_modules/express/lib/router/index.js:174:3)
    at router (/zebra/node_modules/express/lib/router/index.js:47:12)
    at Layer.handle [as handle_request] (/zebra/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/zebra/node_modules/express/lib/router/index.js:317:13)
(node:24) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 9)
(node:24) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'timestart' of undefined
    at /zebra/mongoV1.js:67:50
    at /zebra/mongoV1.js:87:21
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:189:7)

Configuration:

  • OS: macOS Monterey 12.5.1
  • Browser Safari
  • Version 15.6.1

Server:

  • OS: Ubuntu 20.04
  • Docker: 20.10.12, build 20.10.12-0ubuntu2~20.04.1
  • docker-compose:
docker-compose version 1.25.0, build unknown
docker-py version: 4.1.0
CPython version: 3.8.10
OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020

Zebra Config [Login info redacted]:

{
  "mongourl": "zebra-mongo",
  "dbinterval": "100",
  "dbname": "zebraDB",
  "appurl": "localhost",
  "appport": "3090",
  "mongoport": "27017",
  "ppminutesInterval": "30",
  "rmf3interval": "100",
  "zebra_httptype": "https",
  "useDbAuth": "true",
  "dbUser": "user",
  "dbPassword": "pass",
  "authSource": "admin",
  "use_cert": "false",
  "grafanaurl": "localhost",
  "grafanaport": "9000",
  "grafanahttptype": "http",
  "dds": {
    "S01": {
      "ddshhttptype": "http",
      "ddsbaseurl": "******",
      "ddsbaseport": "8803",
      "ddsauth": "true",
      "ddsuser": "******",
      "ddspwd": "******",
      "rmf3filename": "rmfm3.xml",
      "rmfppfilename": "rmfpp.xml",
      "mvsResource": "******",
      "PCI": 3340,
      "usePrometheus": "true",
      "useMongo": "true"
    }
  }
}

docker-compose.yaml

version: '3'
services:
  zebra:
    container_name: zebra
    build: ~/zebra/src
    restart: always
    ports:
      - '3090:3090'
    depends_on:
      - mongo
      - prometheus
      - grafana
  mongo:
    container_name: zebra-mongo
    image: mongo:5.0.3
    ports:
      - '27017:27017'
    volumes:
      - mongo-data:/data/db
  prometheus:
    container_name: zebra-prometheus
    image: prom/prometheus:v2.30.3
    ports:
      - '9090:9090'
    volumes:
      - prometheus-data:/prometheus/data
      - ~/zebra/src/config/prometheus:/etc/prometheus
  grafana:
    container_name: zebra-grafana
    image: grafana/grafana:8.2.2
    ports:
      - '3000:3000'
    depends_on:
      - prometheus
    volumes:
      - grafana-data:/var/lib/grafana
      - ~/zebra/src/config/grafana:/etc/grafana/provisioning/datasources
  json_exporter:
    image: prometheuscommunity/json-exporter
    container_name: json_exporter
    volumes:
      - ./json-exporter/config.yml:/config.yml:ro
    ports:
      - '7979:7979'
    depends_on:
      - zebra
volumes:
  mongo-data:
  prometheus-data:
  grafana-data:

Additional context
I first thought it was the mongoURL, so I changed the URL to the context of the docker network. All of the hosts should be using the same underlying network. I have confirmed that the DDS URL, username and password work.

Add context-sensitive values to the metrics panel so that metrics do not have to be created manually, which would minimize errors in their construction

Hello,

Right now, when using the Metrics web page to try to create a metrics entry, it does not show anything except 'select' and 'all' as options. Since there is currently an open issue with 'all', it would be very helpful to have context-aware selections populate each field so the web page can be used to create something usable.

Thanks!

User account for Grafana (zebra.talktothemainframe.com:3000/login)

Hi!!

Sorry for opening this again, but I am very interested in seeing the Grafana demo dashboards.

@behives sent me next link to request an account for demo site:

https://openmainframeproject.slack.com/archives/C01QWBJG3A4

But I can't log in to that Slack channel because I don't have an email with one of the allowed domains.

Please can you help me? Can you send me account for zebra.talktothemainframe.com:3000/login to netamego gmail.com?

Thank you very much!!!

Best regards.

MongoDb connection error

When launching with the MongoDB service enabled, it generates errors, probably due to recent updates of some mongoose components...

(node:3811) UnhandledPromiseRejectionWarning: MongoError: command create requires authentication
at Connection. (/home/zebra/zebra_new_api/zebra/src/node_modules/mongodb/lib/core/connection/pool.js:451:61)
at emitTwo (events.js:126:13)
at Connection.emit (events.js:214:7)
at decompress (/home/zebra/zebra_new_api/zebra/src/node_modules/mongodb/lib/core/connection/connection.js:493:10)
at Inflate.onEnd (zlib.js:131:5)
at emitNone (events.js:111:20)
at Inflate.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)

error retrieving JSON report

Hello,

I just recently brought up zebra for testing and the following errors are showing on the log when trying to retrieve the reports:

GET /rmfm3?reports=SYSINFO - - ms - -
GET /rmfm3?reports=SYSINFO - - ms - -
GET /rmfm3?report=USAGE - - ms - -
GET /rmfm3?report=USAGE - - ms - -
GET /rmfm3?report=CPC - - ms - -
GET /rmfm3?report=CPC - - ms - -
GET /rmfm3?report=PROC 200 20036.297 ms - 35911
GET /rmfm3?report=PROC 200 20036.297 ms - 35911
GET /rmfm3?report=CPC 200 20037.281 ms - 35911
GET /rmfm3?report=CPC 200 20037.281 ms - 35911
GET /rmfm3?report=USAGE 200 20037.913 ms - 35911
GET /rmfm3?report=USAGE 200 20037.913 ms - 35911
GET /rmfm3?reports=SYSINFO 200 20038.396 ms - 35911
GET /rmfm3?reports=SYSINFO 200 20038.396 ms - 35911
GET /rmfm3?reports=SYSSUM&resource=%22,,SYSPLEX%22 200 40.384 ms - 35911
GET /rmfm3?reports=SYSSUM&resource=%22,,SYSPLEX%22 200 40.384 ms - 35911
(node:193034) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'split' of undefined
at fedDatabase (my-folder/src/mongo.js:53:45)
at my-folder/src/mongo.js:137:7
at my-folder/src/mongo.js:32:5
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:193034) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 739)
CPC Updated Successflly
PROC Updated Successflly
USAGE Updated Successflly
GET /rmfm3?reports=SYSSUM&resource=%22,,SYSPLEX%22 200 37.685 ms - 35911
GET /rmfm3?reports=SYSSUM&resource=%22,,SYSPLEX%22 200 37.685 ms - 35911
Workload Updated Successflly
GET /prommetric 200 0.257 ms - -
GET /prommetric 200 0.257 ms - -
GET /prommetric 200 0.361 ms - -
GET /prommetric 200 0.361 ms - -
GET /prommetric 200 0.243 ms - -
GET /prommetric 200 0.243 ms - -
GET /prommetric 200 0.332 ms - -
GET /prommetric 200 0.332 ms - -
GET /prommetric 200 0.305 ms - -
GET /prommetric 200 0.305 ms - -
(node:193034) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'msg' of undefined
at my-folder/src/app_server/Controllers/RMF3Controller.js:234:22
at my-folder/src/app_server/Controllers/RMF3Controller.js:460:9
at my-folder/src/app_server/Controllers/RMF3Controller.js:77:9
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:193034) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 741)
GET /rmfm3?report=PROC 200 18.698 ms - 2
GET /rmfm3?report=PROC 200 18.698 ms - 2
(node:193034) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'split' of undefined
at fedDatabase (my-folder/src/mongo.js:53:45)
at my-folder/src/mongo.js:128:5
at my-folder/src/mongo.js:32:5
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:193034) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 743)
(node:193034) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'toString' of undefined
at Parser.exports.Parser.Parser.parseString (my-folder/src/node_modules/xml2js/lib/parser.js:312:19)
at Parser.parseString (my-folder/src/node_modules/xml2js/lib/parser.js:5:59)
at Object.module.exports.RMF3bodyParser (my-folder/src/app_server/parser/RMFMonitor3parser.js:11:12)
at my-folder/src/app_server/Controllers/RMF3Controller.js:217:27
at my-folder/src/app_server/Controllers/RMF3Controller.js:135:9
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:193034) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 745)
(node:193034) UnhandledPromiseRejectionWarning: ReferenceError: result is not defined
at my-folder/src/app_server/Controllers/RMF3Controller.js:637:12
at my-folder/src/app_server/Controllers/RMF3Controller.js:77:9
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:193034) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 747)

I am not a Node expert, so I wasn't able to find what could have gone wrong when installing it.
