robottelo-ci's Introduction

robottelo-ci

Jenkins job configuration files used to run Robottelo against Satellite 6 and SAM, and to run unit tests for various Foreman projects.

Installing

In order to create the jobs using the YAML descriptions from this repository, you first have to install the requirements:

pip install -r requirements.txt

This installs all the required packages. Make sure pip is installed first.
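
Optionally, you can do this inside a virtualenv to keep the dependencies isolated (a common convention, not something this repository requires):

virtualenv robottelo-ci
source robottelo-ci/bin/activate
pip install -r requirements.txt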

Setup Jenkins Job Builder

After installing the required packages, run ./setup_jjb.sh. This script sets up a local copy of foreman-infra, whose macros are used for the unit testing of various projects, and creates a jenkins_jobs.ini file with the following layout:

[job_builder]
keep_descriptions=False
include_path=.:scripts:foreman-infra
recursive=True
allow_duplicates=False
exclude=foreman-infra/yaml/jobs

[jenkins]
user=<jenkins-user>
password=<jenkins-api-key>
url=<jenkins-url>

Now update the [jenkins] credentials section in the jenkins_jobs.ini file created by the above script.

Contributions

  1. Fork the repository.
  2. Submit the PR to the project.
  3. Reviewers review the PR and provide comments.
  4. On a successful merge, the robottelo-ci-update job runs and updates the jobs on the server.

Generating the jobs

Before proceeding, it is better to run ./generate_jobs.sh to test the jobs.
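
generate_jobs.sh presumably wraps Jenkins Job Builder's test mode, which renders the job XML locally without pushing anything to the server. A hypothetical direct invocation (the output directory is arbitrary):

jenkins-jobs --conf jenkins_jobs.ini test . -o /tmp/jjb-output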

Creating the jobs

Note: for administrator use only. Never submit the jobs to Jenkins yourself.

Once all of the above steps are completed, you can update the jobs by running the following command:

./update-job.sh job-name

The above command assumes that you are running it from the repository root and have placed the config file there.
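
For reference, update-job.sh presumably wraps Jenkins Job Builder's update mode; a hypothetical direct equivalent (job-name is a placeholder):

jenkins-jobs --conf jenkins_jobs.ini update . job-name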

robottelo-ci's People

Contributors

cpeters, cswiii, devendra104, ehelms, elyezer, evgeni, gauravtalreja1, ichimonji10, ifireball, jacobcallahan, jameerpathan111, jhutar, jyejare, kbidarkar, lpramuk, mirekdlugosz, mshriver, ntkathole, omaciel, omkarkhatavkar, pcreech, pgagne, pondrejk, rochacbruno, rplevka, san7ket, sghai, sthirugn, vijay8451, zjhuntin


robottelo-ci's Issues

Add defaults

Add some default values. The initial values should be defined to store a large number of builds, and each job should get a description saying that the job is managed by JJB and that any change made in the Jenkins UI will be overridden when the jobs get updated.

Give an option to choose latest vs. latest-stable in satellite6-installer

Issue:

Right now, satellite6-installer defaults to the following. The problem here is that updating the latest-stable symlinks is a manual task that often gets missed, which results in faulty builds:

  • latest-stable

Proposed solution:

In satellite6-installer, give the user an option to choose between the following (a sketch follows the list):

  • latest
  • latest-stable
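
A minimal sketch of how the installer job could honor that choice, assuming a hypothetical SNAP_TREE choice parameter and reusing the candidate-tree URL pattern quoted elsewhere in this document:

# SNAP_TREE is a hypothetical Jenkins choice parameter: "latest" or "latest-stable"
SNAP_TREE="${SNAP_TREE:-latest-stable}"
TREE_URL="http://satellite6.server.com/devel/candidate-trees/Satellite/${SNAP_TREE}-Satellite-6.1-RHEL-7"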

Create a Polarion Test Run prior to running Tiered jobs

We currently wait until our last Tiered job is completed to trigger the creation of a Polarion Test Run. This Test Run is then populated with the test results obtained from each individual Tiered test's jUnit file. One issue with this approach is that stubbed tests are not added to said Test Runs, as they are not contained within the jUnit files.

What we want is to create a Test Run shortly after we update all Test Cases, and to include both automated and notautomated Test Cases, so that we can then test the 'stubbed' tests manually and therefore make sure that we cover all the features and cases identified by our team.

Once the Test Run is populated at the end of the Tiered jobs, we should have a Test Run containing automated tests and their results, as well as a series of tests that will have to be verified manually.

Improve Satellite 6 Automation Pipeline

The automation pipeline currently runs as follows:

  1. Provisioning
  2. Tier1
  3. Tier2
  4. Tier3
  5. RHAI
  6. Tier4

The proposal is to keep the same order but create a new job to orchestrate the entire pipeline. This way, if provisioning has already completed but tests are still running, the next build will start only after the previous execution finishes.

The new approach will have the following advantages:

  • Provisioning will not happen while tests are running, which in some cases hangs the running test job.
  • The pipeline is guaranteed to complete for every build. For example, if SNAP X.Y is still running when SNAP X.Y+1 is released, the previous snap is guaranteed to complete before the updated one runs.
  • Individual jobs of the pipeline can be triggered without triggering any others.

On the other hand, these will be the disadvantages:

  • A new build must wait until the previous build completes its execution. This can be avoided by simply stopping the running jobs if really necessary.

If you have any other suggestions, please let me know.

Add option to installer to set verbosity

Currently the installer jobs default to passing -d to katello-configure, which is useful for determining what went wrong in failed jobs. However, -d causes the installer to not save the answer file, which can also be useful for other tasks.

We should modify our existing jobs to allow the user to decide whether or not to pass -d to the installer.
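
A minimal sketch of such a toggle, assuming a hypothetical INSTALLER_DEBUG boolean job parameter:

# INSTALLER_DEBUG is a hypothetical boolean job parameter
INSTALLER_ARGS=""
if [ "${INSTALLER_DEBUG}" = "true" ]; then
    INSTALLER_ARGS="-d"
fi
# left unquoted on purpose: an empty value expands to no argument
katello-configure ${INSTALLER_ARGS}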

Fix Instance Names for Upgrade Automation

Fix the current 'SATELLITE_INSTANCE' and 'CAPSULE_INSTANCE' names so that automation deletes the previous run's instance automatically, instead of requiring human/user intervention in OpenStack.

Clean up workspace before cloning pylarion for betelgeuse

Running the satellite6-betelgeuse-test-run-rhel6 job fails because cloning pylarion fails when the checkout already exists:

+ git clone https://EDITED
fatal: destination path 'pylarion' already exists and is not an empty directory.
Build step 'Virtualenv Builder' marked build as failure

Perhaps the satellite6-betelgeuse.sh script should perform a 'clean up' and remove things before proceeding?
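
A minimal guard along those lines, assuming the clone happens inside satellite6-betelgeuse.sh (PYLARION_REPO stands in for the redacted URL above):

# remove any stale checkout left by a previous build, then clone fresh
rm -rf pylarion
git clone "${PYLARION_REPO}" pylarion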

Provide ability to toggle/set gpgcheck in installer

Presently, we specifically disable GPG checking of RPMs because we are generally running against test composes. However, we may want to check against production builds from time to time.

Thus, let's have an option in the installer to toggle gpgcheck=1. For the moment, however, our default should remain "0".
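
A sketch of what the toggle could look like in the installer job; the GPG_CHECK parameter name and the repo file path are both assumptions:

# GPG_CHECK is a hypothetical job parameter; the default stays "0" for test composes
GPG_CHECK="${GPG_CHECK:-0}"
# the repo file path below is an assumption and will vary per setup
sed -i "s/^gpgcheck=.*/gpgcheck=${GPG_CHECK}/" /etc/yum.repos.d/satellite.repo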

Update automation pipeline

Robottelo tests are now marked using tiers. Because of this, the automation pipeline should be updated to match the new organization.

The comparison between the old and new pipeline can be seen in the image below.

[Image: comparison of the old and new automation pipelines]

Handle different server software versions

Satellite 6.0.7, 6.0.8 and 6.1.0 have been released, more releases will land in the future, and nightly builds are available too. Each version acts a little bit differently, and NailGun currently makes use of that versioning information when determining how to talk to the server. In addition, other parts of Satellite QE's software suite may make use of versioning information in the future. Jenkins should be updated to somehow make use of this versioning information. At the very least, version numbers should be passed to NailGun.

Automate existing triggers by using URLTrigger

https://wiki.jenkins-ci.org/display/JENKINS/URLTrigger+Plugin

By cron-like polling of a specific URL, we can trigger builds automatically.
For upstream we can poll:
https://fedorapeople.org/groups/katello/releases/yum/nightly/katello/RHEL/7/x86_64/repodata/repomd.xml

For downstream we can poll:
http://satellite6.server.com/devel/candidate-trees/Satellite/latest-Satellite-6.1-RHEL-7/COMPOSE_ID

Downstream has a formal QE hand-off process; however, automation can run even on a compose that has not (yet) been handed off.
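
For illustration, the plugin's behavior amounts to a cron-driven check like the following (a conceptual sketch, not the plugin's implementation):

# conceptual equivalent of URLTrigger: fire when the watched file changes
URL="http://satellite6.server.com/devel/candidate-trees/Satellite/latest-Satellite-6.1-RHEL-7/COMPOSE_ID"
NEW=$(curl -s "$URL")
if [ "$NEW" != "$(cat .last_compose_id 2>/dev/null)" ]; then
    echo "$NEW" > .last_compose_id
    # a real setup would trigger the downstream Jenkins build here
fi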

satellite6-standalone-automation failed

I triggered the above job with TEST_TYPE = smoke-api and it failed with the following error: it is not able to escape 'not stubbed' properly.

++ which py.test
+ PYTEST='/home/jenkins/shiningpanda/jobs/ad0ef6b5/virtualenvs/d41d8cd9/bin/py.test -v --junit-xml=foreman-results.xml -m '\''not stubbed'\'''
+ '[' -n '' ']'
+ case "${TEST_TYPE}" in
++ echo smoke-api
++ cut -d- -f2
+ TEST_TYPE=api
+ /home/jenkins/shiningpanda/jobs/ad0ef6b5/virtualenvs/d41d8cd9/bin/py.test -v --junit-xml=foreman-results.xml -m ''\''not' 'stubbed'\''' tests/foreman/smoke/test_api_smoke.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.10, pytest-2.8.7, py-1.4.31, pluggy-0.3.1 -- /home/jenkins/shiningpanda/jobs/ad0ef6b5/virtualenvs/d41d8cd9/bin/python2.7
cachedir: .cache
rootdir: /home/jenkins/workspace/satellite6-standalone-automation, inifile: 
plugins: xdist-1.14
collecting ... 
 generated xml file: /home/jenkins/workspace/satellite6-standalone-automation/foreman-results.xml 
========================= no tests ran in 0.00 seconds =========================
ERROR: file not found: stubbed'
Build step 'Virtualenv Builder' marked build as failure
Archiving artifacts
Recording test results
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Finished: FAILURE
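
The root cause is visible in the trace: the marker expression 'not stubbed' is stored inside the flat PYTEST string, so when that string is expanded unquoted the marker splits into two words. One way to avoid this (a sketch, not necessarily how the job script should be restructured) is to build the command as a bash array:

# an array keeps 'not stubbed' as a single argument when expanded
PYTEST_CMD=(py.test -v --junit-xml=foreman-results.xml -m 'not stubbed')
"${PYTEST_CMD[@]}" tests/foreman/smoke/test_api_smoke.py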

New jenkins jobs to install from cdn

  • Create two new Jenkins jobs, one each for RHEL 6.7 and 7.1.
  • Each job needs its own new VM.
  • Both jobs should install from the CDN.
  • These VMs will keep running the latest Satellite.

Capsules: Integrate capsule installation/configuration into jenkins

Depends on:
SatelliteQE/automation-tools#185
SatelliteQE/automation-tools#186

Integrate capsule installation/configuration from automation-tools into something consumable within jenkins.

REQUIREMENTS

  • Be able to install a capsule from Jenkins.
  • Be able to install multiple capsules at one time.
  • Be able to accept parameters used for installation; for example, an option to enable DHCP at capsule install time, or not (if we wanted to test PXE-less and/or do not have a DHCP server).

Possible parameters for installation (not sure of the best way to implement this in Jenkins; a sketch follows the list):

  • parent (satellite) hostname
  • child (target capsule) hostname
  • content view name for registration? not sure.
  • partition_disk toggle
  • ...?
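
For illustration only, a shell step in such a job might consume the parameters as environment variables (every name below is hypothetical):

# all variable names below are hypothetical job parameters
echo "Installing capsule ${CAPSULE_HOSTNAME} against parent ${SATELLITE_HOSTNAME}"
EXTRA_ARGS=""
if [ "${PARTITION_DISK}" = "true" ]; then
    EXTRA_ARGS="--partition-disk"  # hypothetical flag passed to the install tooling
fi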

Satellite 6 test update job

Create a job that will install from the CDN and, after that, update to a more recent compose build.

Components to consider when upgrading:

  • Core Satellite
  • Capsule
  • katello-disconnected
  • katello-agent
  • puppet agent
  • Others...?

Assertions/scenarios that need to be exercised:

  • All existing content within a populated instance is still available/exposed in the upgraded instance.
  • All existing content within upgraded components connected to an upgraded, populated instance is still available/exposed.
  • Baseline functionality works in an upgraded instance.
  • What happens when an older component (see above) tries to communicate with an upgraded instance?
  • Check for the availability of any new communication ports that might be opened up in the upgraded instance.
  • Ability to roll back if an upgrade fails, and/or provide a --dry-run option that runs through the motions but does not actually make system changes.
  • Connection of external components (see above) acts sanely: does not cause instability; perhaps gives deprecation warnings or auto-upgrades? TBD.
Approach

  • Populate an older instance and all external components; perhaps use automation to populate known, constant values (rather than random data). We may want to save an image of this "dirty" system for subsequent upgrade tests.
  • Upgrade the core server.
  • Assure content is still composed on the upgraded server.
  • Attempt to populate new data onto the upgraded instance.
  • Attempt to exercise all new functionality that is a delta between the old and new instance.
  • Attempt to connect upgraded components to the new server and interact with them.
  • Attempt to connect non-upgraded components to the new server and interact with them.
  • Attempt to connect newly installed components of the latest version to the new server and interact with them.
  • Provision a fresh instance of the newest release and populate it. Compare its schema/data with that from the upgraded instance.

foreman debug creation failed

[PostBuildScript] - Execution post build scripts.
[PostBuildScript] Build is not failure : do not execute script
Archiving artifacts
ERROR: No artifacts found that match the file pattern "foreman-debug.tar.xz". Configuration error?
ERROR: ‘foreman-debug.tar.xz’ doesn’t match anything
Build step 'Archive the artifacts' changed build result to FAILURE
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Finished: FAILURE

Provide details in a .txt file regarding product-specific packages installed.

Purpose
Historically we have had build dates in the compose, but that may not always be the case. Thus, it is (now even more) important to be able to readily provide dev with details about which specific components have been tested.

During an automated install, we should go ahead and set those details aside in a text file of some sort, for easier retrieval and as a record of what is presently there (in case dev asks to install updated RPMs for debugging, etc.).

Proposed Implementation

  • We have manually been using a command like the following (this may need to be updated, however):

for i in $(rpm -qa | grep -iE "^katello|^pulp|^candlepin|^foreman|^headpin|^thumbslug|^elasticsearch|ldap|signo|ruby193-rubygem-runcible" | sort); do echo "* $i"; done

  • There is also the possibility that foreman-debug can provide said details as well.

Mockup

[root@hostname ~]# cat compose-details.txt

Install date: $date
Package details for "my.hostname.example.com"

abc-1.0.0.x86_64.rpm
xyz-1.0.0.x86_64.rpm
pdq-1.0.0.x86_64.rpm
...
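
A sketch of a job step that would produce such a file, reusing the package filter from the command above (abbreviated here):

# hypothetical job step: set the compose details aside for later retrieval
{
    echo "Install date: $(date)"
    echo "Package details for \"$(hostname -f)\""
    echo
    rpm -qa | grep -iE "^katello|^pulp|^candlepin|^foreman" | sort
} > compose-details.txt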

Other Ideas

Maybe we could also include other important details in this output log; just some thoughts:

  • Selinux status?
  • Firewall status?
  • Could we reference the CI job number/URL? dunno.

Not able to generate/update jobs

Getting the following error every time generate_jobs.sh or update_job.sh is run:

Traceback (most recent call last):
  File "/home/elyezer/.virtualenvs/robottelo-ci/bin/jenkins-jobs", line 11, in <module>
    sys.exit(main())
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 172, in main
    execute(options, config)
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/cmd.py", line 321, in execute
    output=options.output_dir)
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/builder.py", line 288, in update_job
    self.parser.generateXML()
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 311, in generateXML
    self.xml_jobs.append(self.getXMLForJob(job))
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 321, in getXMLForJob
    self.gen_xml(xml, data)
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/parser.py", line 328, in gen_xml
    module.gen_xml(self, xml, data)
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/modules/triggers.py", line 1121, in gen_xml
    self.registry.dispatch('trigger', parser, trig_e, trigger)
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/registry.py", line 200, in dispatch
    parser, xml_parent, b, component_data)
  File "/home/elyezer/.virtualenvs/robottelo-ci/lib/python2.7/site-packages/jenkins_jobs/registry.py", line 204, in dispatch
    format(name, component_type))
jenkins_jobs.errors.JenkinsJobsException: Unknown entry point or macro 'gitlab' for component type: 'trigger'.
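
The error means the installed jenkins-job-builder release does not know a 'gitlab' trigger, so the likely fix (a guess, not a confirmed resolution) is to upgrade JJB, or to pin requirements.txt to a release that ships that trigger:

pip install --upgrade jenkins-job-builder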

create a new jenkins job to run just rhai tests

tests/foreman/rhai

Requirements:

  1. This job should accept a Satellite instance and run the RHAI tests (see the sketch below).
  2. This job needs to be triggered after the Satellite UI tests are completed.
  3. Any failure in RHAI should be automatically emailed to the RHAI team.
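
The core test step of such a job could be as simple as pointing the pytest invocation used elsewhere in this document at the RHAI tree (a sketch; the results file name is arbitrary):

# run only the RHAI tests, recording results for the email notification step
py.test -v --junit-xml=rhai-results.xml tests/foreman/rhai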

Manage Jenkins plugins

Our Jenkins installation makes use of several plugins. robottelo-ci should be able to manage the plugins that Jenkins has installed. I envision robottelo-ci being able to accomplish the following tasks:

  • Given a freshly installed Jenkins instance, make that Jenkins instance install a correct set of plugins.
  • Given an existing Jenkins instance, make that Jenkins instance install any missing plugins and remove any extra plugins.
  • Given an existing Jenkins instance, make that Jenkins instance upgrade or downgrade its installed plugins.
