
gitlab-languages's Introduction

gitlab-languages


Utility to generate a Prometheus data source text file for your GitLab instance using the GitLab Language API

installation from PyPI

  1. Install from PyPI as CLI

    pip install -U gitlab-languages
  2. Run the program

    gitlab-languages --cache cache.json --args owned=True # more info about usage: see below

installation from source

  1. Install python dependencies

    poetry install
  2. Set the required environment variables (see the sketch after this list)

    export GITLAB_TOKEN=<SOME_TOKEN_WITH_API_SCOPE>
    export GITLAB_URL=https://gitlab.com # optional, defaults to https://gitlab.com
    # optional:
    export WORKER_COUNT=24
  3. Run the tool

    gitlab-languages
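
The environment variables above are consumed through the python-gitlab library. The following is a minimal sketch of the equivalent client setup, assuming the standard python-gitlab constructor; the tool's internals may differ, and WORKER_COUNT only controls scan parallelism rather than the client itself:

    import os

    import gitlab  # provided by the python-gitlab package

    # GITLAB_URL is optional and defaults to https://gitlab.com;
    # GITLAB_TOKEN must be a token with API scope.
    gl = gitlab.Gitlab(
        os.environ.get("GITLAB_URL", "https://gitlab.com"),
        private_token=os.environ["GITLAB_TOKEN"],
    )
    gl.auth()  # optional: verify that the token is valid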

usage

usage: gitlab_languages [-h] [--project_limit PROJECT_LIMIT]
                        [--args ARGS [ARGS ...]]
                        [--groups GROUPS [GROUPS ...]]
                        [--ignore_groups IGNORE_GROUPS [IGNORE_GROUPS ...]]
                        [--cache CACHE] [-o OUTPUT]

optional arguments:
  -h, --help            show this help message and exit
  --project_limit PROJECT_LIMIT
                        Set project limit to scan
  --args ARGS [ARGS ...]
                        Provide custom args to the GitLab API
  --groups GROUPS [GROUPS ...]
                        Scan only certain groups
  --ignore_groups IGNORE_GROUPS [IGNORE_GROUPS ...]
                        Ignore certain groups and their projects
  --cache CACHE         Cache file to use
  -o OUTPUT, --output OUTPUT
                        Location of the metrics file output

additional arguments

You can specify additional arguments that are supplied directly to the python-gitlab library and the GitLab API endpoint. Example:

gitlab-languages --args owned=True

More info about the available additional arguments can be found in the python-gitlab and GitLab API documentation.
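
Under the hood, such key=value pairs are forwarded to python-gitlab's project listing call. As a rough sketch (the exact call inside gitlab-languages may differ), --args owned=True corresponds to something like:

    import os

    import gitlab

    gl = gitlab.Gitlab(
        os.environ.get("GITLAB_URL", "https://gitlab.com"),
        private_token=os.environ["GITLAB_TOKEN"],
    )

    # owned=True is passed straight through to the GitLab projects API,
    # just like `gitlab-languages --args owned=True`
    for project in gl.projects.list(owned=True):
        # languages() hits the per-project languages endpoint
        print(project.path_with_namespace, project.languages())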

example output

The output will look something like this:

metrics.txt

# HELP languages_percent Languages scanned in percent
# TYPE languages_percent gauge
languages_percent{language="Java"} 11.73
languages_percent{language="CSS"} 1.97
languages_percent{language="TypeScript"} 3.5
languages_percent{language="HTML"} 6.14
languages_percent{language="JavaScript"} 17.16
languages_percent{language="Python"} 10.4
languages_percent{language="Modelica"} 3.7
languages_percent{language="TeX"} 1.64
languages_percent{language="Shell"} 6.35
languages_percent{language="Batchfile"} 0.76
languages_percent{language="HCL"} 7.15
languages_percent{language="BitBake"} 0.56
languages_percent{language="C"} 5.25
languages_percent{language="C++"} 0.72
languages_percent{language="Matlab"} 2.77
languages_percent{language="TXL"} 0.05
languages_percent{language="Objective-C"} 1.48
languages_percent{language="XSLT"} 1.68
languages_percent{language="Perl"} 1.71
languages_percent{language="Ruby"} 0.03
languages_percent{language="C#"} 10.3
languages_percent{language="PowerShell"} 0.11
languages_percent{language="Pascal"} 0.01
languages_percent{language="ASP"} 0.0
languages_percent{language="PLpgSQL"} 0.0
languages_percent{language="Makefile"} 2.06
languages_percent{language="SQLPL"} 0.0
languages_percent{language="Puppet"} 0.0
languages_percent{language="Groovy"} 2.56
languages_percent{language="M4"} 0.01
languages_percent{language="Roff"} 0.15
languages_percent{language="CMake"} 0.01
languages_percent{language="NSIS"} 0.01
languages_percent{language="PHP"} 0.0
languages_percent{language="Go"} 0.0
languages_percent{language="Smalltalk"} 0.02
languages_percent{language="Visual Basic"} 0.0
languages_percent{language="Smarty"} 0.0
# HELP languages_scanned_total Total languages scanned
# TYPE languages_scanned_total gauge
languages_scanned_total 38.0
# HELP projects_scanned_total Total projects scanned
# TYPE projects_scanned_total gauge
projects_scanned_total 61.0
# HELP projects_skipped_total Total projects skipped
# TYPE projects_skipped_total gauge
projects_skipped_total 0.0
# HELP projects_no_language_total Projects without language detected
# TYPE projects_no_language_total gauge
projects_no_language_total 39.0
# HELP groups_scanned_total Total groups scanned
# TYPE groups_scanned_total gauge
groups_scanned_total 0.0

Run the script via GitLab CI with a schedule and publish the metrics.txt file with GitLab Pages. Then you can add it to your Prometheus instance as a scrape target.
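
If you want to sanity-check the generated metrics.txt locally before adding it as a scrape target, you can read it back with the parser from the prometheus_client package (a minimal sketch):

    from prometheus_client.parser import text_string_to_metric_families

    with open("metrics.txt") as f:
        metrics_text = f.read()

    for family in text_string_to_metric_families(metrics_text):
        for sample in family.samples:
            # e.g. languages_percent {'language': 'Python'} 10.4
            print(sample.name, sample.labels, sample.value)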

gitlab-languages's People

Contributors

dependabot[bot], max-wittig


gitlab-languages's Issues

Ensure that last_activity_current is json serializable

Crash when using cache

Traceback (most recent call last):
  File "/builds/***/gitlab-languages/venv/bin/gitlab_languages", line 11, in <module>
    sys.exit(main())
  File "/builds//***//gitlab-languages/venv/lib/python3.7/site-packages/gitlab_languages.py", line 418, in main
    json.dump(language_cache, f, indent=2)
  File "/usr/local/lib/python3.7/json/__init__.py", line 179, in dump
    for chunk in iterable:
  File "/usr/local/lib/python3.7/json/encoder.py", line 431, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/usr/local/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/usr/local/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
    yield from chunks
  File "/usr/local/lib/python3.7/json/encoder.py", line 325, in _iterencode_list
    yield from chunks
  File "/usr/local/lib/python3.7/json/encoder.py", line 438, in _iterencode
    o = _default(o)
  File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type MayaDT is not JSON serializable
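
A common way to avoid this class of error is to pass a default hook to json.dump that converts timestamp objects (such as MayaDT) to strings before they reach the cache file. This is only a sketch of the idea, not necessarily the fix applied in the project; the cache entry below is hypothetical:

    import datetime
    import json

    def _json_default(value):
        # MayaDT exposes .iso8601(); anything else falls back to str()
        iso = getattr(value, "iso8601", None)
        return iso() if callable(iso) else str(value)

    # hypothetical cache entry containing a timestamp object, as in the traceback above
    language_cache = {
        "group/project": {"last_activity": datetime.datetime.now(datetime.timezone.utc)}
    }

    with open("cache.json", "w") as f:
        json.dump(language_cache, f, indent=2, default=_json_default)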

Consider adding a JSON output option, plus more precision in the percentages

It would be nice to have a JSON output option.
Also, for a larger data set it would be nice to have more precision in the percentages. Half of our data set is below 1 %, which means that data is lost; although insignificant to the larger picture, it would be great to do a top-N within that 1 %, especially if the other 99 % comprises only 3-4 languages.
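
To illustrate the precision concern with made-up numbers: at two decimal places, a language that makes up 0.004 % of the code base rounds down to 0.0 and becomes indistinguishable from a language that is not present at all:

    share = 0.004           # hypothetical share of a small language, in percent
    print(round(share, 2))  # 0.0   -> lost at two decimal places
    print(round(share, 4))  # 0.004 -> survives with higher precision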

Crash in last stage

Crash

Traceback (most recent call last):
  File "gitlab_languages.py", line 232, in <module>
    main()
  File "gitlab_languages.py", line 227, in main
    args=additional_args_dict
  File "gitlab_languages.py", line 157, in scan_projects
    self.metrics_collector.write(projects_scanned=self.projects_scanned)
  File "gitlab_languages.py", line 68, in write
    registry=self.registry,
  File "/home/projects/gitlab_languages/venv/lib/python3.6/site-packages/prometheus_client/core.py", line 591, in init
    raise ValueError('Invalid metric name: ' + full_name)
ValueError: Invalid metric name: 1CEnterprise
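
Prometheus metric names must match the pattern [a-zA-Z_:][a-zA-Z0-9_:]*, so a language such as 1C Enterprise, whose name starts with a digit, cannot be used in a metric name as-is. A typical workaround is to sanitize the name before registering the metric; this is a sketch of the approach, not the project's actual fix:

    import re

    def sanitize_metric_name(name):
        # Prometheus metric names must match [a-zA-Z_:][a-zA-Z0-9_:]*
        name = re.sub(r"[^a-zA-Z0-9_:]", "_", name)
        if re.match(r"^[0-9]", name):
            name = "_" + name
        return name

    print(sanitize_metric_name("1CEnterprise"))  # _1CEnterprise
    print(sanitize_metric_name("C++"))           # C__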
