
pyresttest's Issues

No import examples on documentation

I'm looking for import examples so I can reuse my configurations. I tried many ways I thought it could be done, but none of them worked. It would also be nice to see some examples of the URL as a top-level syntax element.

As the wiki says:

There are 5 top level test syntax elements:

    url: a simple test, fetches given url via GET request and checks for good response code
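
For reference, a minimal sketch of what reuse via the import element presumably looks like (an assumption based on the five top-level elements the wiki lists, not verified against the docs), alongside the url shorthand quoted above:

---
- import: common_tests.yaml   # hypothetical shared file to reuse
- url: "/api/person/"         # url shorthand: GET and check for a good response code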

Support setting multiple request headers with same name

Currently headers on requests are parsed and stored into a dictionary of key, value pairs.

It would be nice to convert that to a list of (key, value) tuples to support setting duplicate headers (for example, cookie values).

This requires changing the tests & test_tests classes, and changing how headers are parsed, stored, and passed to pycurl. That sounds like a lot but is, in fact, super-duper easy.

if head:  # Convert headers dictionary to list of header entries, tested and working
    headers = [str(headername) + ':' + str(headervalue) for headername, headervalue in head.items()]

# BECOMES

if head:  # Convert list of header (key, value) tuples to list of header entries
    headers = [str(headername) + ':' + str(headervalue) for (headername, headervalue) in head]

Note that for back-compatibility, we need to support still receiving a yaml dictionary of header values for test config.
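
A minimal sketch of that back-compatible normalization (an assumed helper, not the actual pyresttest parsing code):

import collections

def coerce_headers_to_tuples(headers_config):
    """Accept either the legacy YAML dict or a list of single-entry dicts/pairs,
    and return a list of (name, value) tuples so duplicate header names survive."""
    if isinstance(headers_config, dict):
        return list(headers_config.items())   # legacy dict form: duplicates already lost
    tuples = []
    for entry in headers_config:
        if isinstance(entry, dict):
            tuples.extend(entry.items())      # e.g. [{'Cookie': 'a=1'}, {'Cookie': 'b=2'}]
        else:
            tuples.append(tuple(entry))       # already a (name, value) pair
    return tuples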

Refactor test lifecycles to support parallel operations

As a pyresttest user I'd like to be able to parallelize the test execution (parallel HTTP calls).

TL;DR Summary of Analysis

  1. Worry about parallelizing network I/O first, then the rest, since it's 95% of time in most cases.
    • Remaining overhead is dominated by JSON parsing on extract/validate and curl object creation (caching and curl.reset() will solve that)
  2. Resttest framework methods need to be refactored to isolate parts
    • (Re)Configure curl: Function to (re)generate Curl objects for given test (reusing existing if possible)
    • Execute curl: curl.perform() -- multiplexed by CurlMulti or a wrapper on it; gotcha: reading body/headers.
    • Analyze curl: gather stats, return appropriate result type
    • Reduce results: Summarize benchmarks, add to pass/fail summaries, etc
    • Control flow: Break from loop if needed.

Need to start working out code for the above.

Precursor: use curl.reset() when reusing curl handles.

Look at using CurlMulti, see example:
https://github.com/Lispython/pycurl/blob/master/examples/retriever-multi.py
See also: https://github.com/tornadoweb/tornado/blob/master/tornado/curl_httpclient.py

PyCurl Multi Docs: http://pycurl.sourceforge.net/doc/curlmultiobject.html#curlmultiobject
LibCurl: http://curl.haxx.se/libcurl/c/libcurl-multi.html
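
A minimal batch-execution sketch along the lines of the linked retriever-multi example; it assumes each Curl handle is already fully configured (including WRITEFUNCTION for the body):

import pycurl

def run_batch(curl_handles):
    """Drive a batch of pre-configured Curl handles concurrently with CurlMulti."""
    multi = pycurl.CurlMulti()
    for handle in curl_handles:
        multi.add_handle(handle)

    num_active = len(curl_handles)
    while num_active:
        ret, num_active = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            multi.select(1.0)  # wait for socket activity before the next perform()

    for handle in curl_handles:
        multi.remove_handle(handle)
    multi.close()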

Using multiprocessing pools for process-parallel execution: http://stackoverflow.com/questions/3842237/parallel-processing-in-python.

Concurrency should be managed at the testset level. Reasons below:

  • Some testsets can be run in parallel (fetches/GETs), some cannot (creating/updating/deleting)
  • Multiple testsets CANNOT be run in parallel, otherwise unclear behavior results (testsets may depend on others to do setup/teardown).
  • If we allow individual tests to define parallel/nonparallel behavior, order of execution & handling of testset becomes complex.
  • Question: should testsets default to parallel or serial?
    • Gotcha: context-modifying tests cannot be safely concurrent
    • Decision point: start just accepting user's setting, defaulting to serial.
    • Decision part 2: slowly start allowing concurrent default when safety checks for concurrency pass. Narrow down what cannot auto-parallelize over time.

Config syntax:

---
- config:
  concurrency: all  # Maximum, one thread per test run
  concurrency: 1  # single thread, always serial
  concurrency: none  # another way to ensure serial
  concurrency: -1  # yes, this is serial too, as is anything <= 1
  concurrency: 4  # Up to 4 requests at once
  concurrency: 16  # Up to 16 requests at once, if that many tests exist
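
A sketch of how those settings could be normalized at parse time (an assumed helper; the behavior just mirrors the comments above):

def parse_concurrency(value, test_count):
    """Normalize the concurrency setting to a worker count; anything <= 1 means serial."""
    if value in (None, 'none'):
        return 1
    if value == 'all':
        return max(test_count, 1)             # maximum: one worker per test run
    workers = int(value)
    if workers <= 1:
        return 1                              # serial, as documented above
    return min(workers, max(test_count, 1))   # never more workers than tests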

Implementation:
All initial parsing runs, then we decide how to execute (serial or concurrent).
For concurrent execution, I see 4 levels of concurrency, with increasing resource use and performance, but also increasing complexity:

  1. Serial test setup/analysis, parallel network requests
    • Generate tests, then execute batches in parallel with CurlMulti and analyze results serially before next batch.
    • Execution is done using map(...) calls on functions, very clean.
    • Pros:
      • Fairly easy to do (?) with CurlMulti
      • Provides fixed batch execution methods
      • Avoids Process management
      • No worries about synchronization issues with tests themselves
    • Con:
  2. Parallel execution, process does setup/execute/analyze and returns result
    • Each process does a full test/benchmark execution (setup, network call, return)
    • Basically do results = pool.map(run_test, tests) (see the sketch after this list)
    • Multiprocessing makes this easy, minimal code changes vs. current
    • Pros:
      • Easy, uses existing methods most effectively
      • Gives a more consistent concurrent load for load testing
      • Fully uses multiple cores
    • Con:
      • Synchronization issues with generators, etc
      • Error handling & logging become a bit broken
      • Requires ability to gather all results at once before processing
      • Process management and similar headaches.
      • May not use networking as efficiently as CurlMulti does
      • Bottlenecked by serial processing to some extent
  3. Controller process, in parallel with a concurrent network I/O process
    • Controller process generates tests and feeds them to a concurrent network request process, which continuously executes them and then returns results async, which get analyzed by main thread.
    • Network I/O uses CurlMulti, single thread does processing
    • Pros:
      • Gives a more consistent concurrent load for load testing
      • Network side fully decoupled from test overheads
    • Con:
      • More complex than above two (combines them)
  4. Controller process, parallel create/analyze processes, parallel network I/O process
    • One controller thread for orchestration which mostly does setup/cleanup
    • Tests are generated and analyzed by process pool
    • A network I/O execution pool receives curl objects to execute and runs callbacks when they complete so they can be processed.
    • Pros:
      • Very efficient
      • Maximum resource use
      • Allows tuning network and CPU bound concurrency separately
      • Very amenable to networked execution, just talk to controller
    • Cons:
      • Very complex
      • Needs to be able to continuously feed in work to analysis process pool (orchestrated by controller)
      • Needs
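
A minimal sketch of option 2, assuming run_test can be invoked with just the test object (or a partial carrying config/context) and that tests and results are picklable:

from multiprocessing import Pool

def run_testset_parallel(tests, run_test, concurrency=4):
    """Each worker process does setup, the HTTP call, and analysis for one test;
    the parent gathers all results before summarizing (option 2 above)."""
    pool = Pool(processes=concurrency)
    try:
        return pool.map(run_test, tests)  # note the argument order: map(function, iterable)
    finally:
        pool.close()
        pool.join()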

Analysis:

  • Setup/teardown for benchmark was about 5% overhead.
  • Network is most of the time, even for purely local calls
  • TODO: Find out what the overhead is for a normal test; that will tell us how much we need to parallelize this.

Test overhead:

  • I did: time python profile_basic_test.py
  • For github_api_test, it took: 1.189s realtime, cProfile says 1.041s was curl.perform() (about 10% overhead, including test I/O and parsing)
    • On a second run, with cumtime: real time 0.991s, cProfile time: 0.855s, curl.perform time: 0.837s, read/parse YAML for test file: 0.014s
    • Test overhead: 0.004s (4 ms), 0.5% overhead, 2ms overhead per test
    • Suspect that JSON parsing is primary overhead
  • For content-handler test with testapp (fully local call, minimal work), doing cumtime:
    • 'time' says realtime 0.417s, cProfile says 0.282s total runtime, 0.234s was curl.perform (18% overhead).
    • Overhead had 0.040s reading/parsing test file from YAML to object
    • Total overhead of testing: 0.008s (8 ms). Not bad for local calls with templating and file read, about 3-4% overhead for test running.
    • Since this is actually 19 tests(!), overhead is around 0.5ms
  • Conclusion: barring schema/YAML loading, overhead is <5% for normal tests

Decision Point:

  • Start with refactoring for 1, but with eye to doing 3/4 in the end and building something sane for that (isolating logging)
  • Look at actor model as a possible implementation approach?
  • Look at how functional languages do this (our implementation should be guided by this) for sanity's sake.
    • Gear toward map/reduce/filter implementations and limit mutation

Issue with Basic Auth or Package Installation

Hi,
I installed your package as per your readme instructions and after running through the quickstart I started to play with my own API - a call to a Basic Auth secured function using the following YAML:


- config:
    - testset: "API session function Test"
- test:
    - group: "Session"
    - name: "Get /session/"
    - url: "/api/session/"
    - auth_username: "[email protected]"
    - auth_password: "pass"
    - expected_status: [200]

Running with the command 'resttest.py http://10.1.1.84:80 apitest2.yaml --log debug --interactive true --print-bodies true' gives the following:

Get /session/

REQUEST:
GET http://10.1.1.84:80/api/session/
Press ENTER when ready:
DEBUG:Initial Test Result, based on expected response code: False
RESPONSE:
Unauthorized Access
DEBUG:{"test": {"expected_status": [200], "_headers": {}, "group": "Session", "name": "Get /session/", "_url": "http://10.1.1.84:80/api/session/", "templated": {}}, "failures": [{"failure_type": "Invalid HTTP Response Code", "message": "Invalid HTTP response code: response code 401 not in expected codes [[200]]", "validator": null, "details": null}], "response_code": 401, "body": "Unauthorized Access", "passed": false}
ERROR:Test Failed: Get /session/ URL=http://10.1.1.84:80/api/session/ Group=Session HTTP Status Code: 401

ERROR:Test Failure, failure type: Invalid HTTP Response Code, Reason: Invalid HTTP response code: response code 401 not in expected codes [[200]]

Test Group Session FAILED: 0/1 Tests Passed!

No sign of the auth info in the data. So I had a look at the code, got confused as it seemed fine, then ran the resttest.py script directly in Python (after adding the line 'print "AUTH: %s:%s" % (mytest.auth_username, mytest.auth_password)' at line 268 in /pyresttest/resttest.py) to get the following:

Get /session/

REQUEST:
GET http://10.1.1.84:80/api/session/
AUTH: [email protected]:pass
Press ENTER when ready:
DEBUG:Initial Test Result, based on expected response code: True
DEBUG:no validators found
RESPONSE:
{
"duration": 43200,
"friendly_name": "A Wade",
"token": "eyJhbGciOiJIUzI1NiIsImV4cCI6MTQyMzg2OTk4NCwiaWF0IjoxNDIzODI2Nzg0fQ.eyJlbWFpbCI6ImFyY2hpZS53YWRlQGFyaWEtbmV0d29ya3MuY29tIn0.NaoVU3gM-JDEy162R1TcBpZgzkpgmsGwqryJTSpm1rI"
}
DEBUG:{"body": "{\n "duration": 43200, \n "friendly_name": "A Wade", \n "token": "eyJhbGciOiJIUzI1NiIsImV4cCI6MTQyMzg2OTk4NCwiaWF0IjoxNDIzODI2Nzg0fQ.eyJlbWFpbCI6ImFyY2hpZS53YWRlQGFyaWEtbmV0d29ya3MuY29tIn0.NaoVU3gM-JDEy162R1TcBpZgzkpgmsGwqryJTSpm1rI"\n}", "response_headers": {"content-length": "235", "expires": "Fri, 13 Feb 2015 11:26:23 GMT", "server": "nginx/1.4.6 (Ubuntu)", "connection": "close", "pragma": "no-cache", "cache-control": "no-cache", "date": "Fri, 13 Feb 2015 11:26:24 GMT", "content-type": "application/json"}, "response_code": 200, "passed": true, "test": {"expected_status": [200], "_headers": {}, "group": "Session", "name": "Get /session/", "_url": "http://10.1.1.84:80/api/session/", "templated": {}, "auth_password": "pass", "auth_username": "[email protected]"}, "failures": []}

INFO:Test Succeeded: Get /session/ URL=http://10.1.1.84:80/api/session/ Group=Session

Test Group Session SUCCEEDED: 1/1 Tests Passed!

Which is what I expected originally. So why would the Auth work fine on one run through but not the other? I suspect the install is pointing to an older version of the script somehow, but I don't know enough to prove that.

Also, running the test:

- test:
    - group: "Session"
    - name: "Put /session/"
    - url: "/api/session/"
    - method: "PUT"
    - auth_username: "[email protected]"
    - auth_password: "pass"
    - expected_status: [200]

Leads to:
Traceback (most recent call last):
  File "resttest.py", line 720, in <module>
    command_line_run(sys.argv[1:])
  File "resttest.py", line 716, in command_line_run
    main(args)
  File "resttest.py", line 685, in main
    failures = run_testsets(tests)
  File "resttest.py", line 547, in run_testsets
    result = run_test(test, test_config = myconfig, context=context)
  File "resttest.py", line 250, in run_test
    curl = templated_test.configure_curl(timeout=test_config.timeout, context=my_context)
  File "/home/archie/Projects/APITesting/pyresttest/pyresttest/tests.py", line 264, in configure_curl
    curl.setopt(curl.READFUNCTION, StringIO(bod).read)
TypeError: must be string or buffer, not None

presumably because of (intentional but perhaps misguided) lack of a body in the PUT call.
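
A minimal sketch of the kind of guard that would avoid that traceback, assuming the PUT branch of configure_curl looks roughly like the failing frame above (Python 2 style, matching the issue; names are illustrative):

from StringIO import StringIO  # Python 2, as in the reported traceback

def set_put_body(curl, body):
    """Hypothetical guard: fall back to an empty body instead of handing None to pycurl."""
    bod = body if body is not None else ''
    curl.setopt(curl.READFUNCTION, StringIO(bod).read)
    curl.setopt(curl.INFILESIZE, len(bod))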

I really like the package though, it will be very useful in the future, so thank you very much for building it and putting it up here.

Cheers,
Wade

WARNING:Failed to load jsonschema validator,

WARNING:Failed to load jsonschema validator, make sure the jsonschema module is installed if you wish to use schema validators.
Test Group Default SUCCEEDED: 1/1 Tests Passed!

If I am not using JSON schema validation, how do I remove this warning?

Issue with using variable extracted from header

Hello --

We've been trying to use pyresttest for testing a Spring web application, where a CSRF token needs to be used for several operations. pyresttest seems to be perfect for the task, but for some reason we couldn't get the header extraction to work as it should...


---
- config:
   - testset: "Example test set"

- test:
   - name: "Initial call to get the CSRF"
   - url: "/get_token"
   - extract_binds:
     - token: { header: "x-csrf-token"}

- test:
   - name: "Test call with token"
   - url: { template : "/example_endpoint/$token/" }

I'm testing with pyresttest http://localhost:8081 test.yaml --interactive=true --log=debug to see the calls and I see that it gives the following error message:

ERROR:Test Failed: Test with token URL=http://localhost:8081/example_endpoint/$token/ Group=Default HTTP Status Code: 404

...so the templating is clearly off somewhere.

Could someone please explain why this happens and how I could fix the above example to make it work? I would greatly appreciate any help.

Kind regards,
Zoltán

Can headers contain a template?

I am trying to extract 'company_id' from one test and use it as a header in the second test. Is it possible? Thanks.

- config:
    - testset: "test"
- test:
    - name: "create company"
    - url: "/companies"
    - method: 'POST'
    - expected_status: [500]
    - extract_binds:
        - 'company_id': {'jsonpath_mini': 'id'}
    - body: '{"name": "fake company"}'
- test:
    - name: "create application"
    - url: "/applications"
    - method: 'POST'
    - headers: {'Content-Type': 'application/json', 'companyId': '$company_id'}
    - body: '{"name": "My fifth app"}'
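
Based on the header templating syntax used in other issues here (headers: {template: {...}}), the second test would presumably need to wrap its headers in a template block so $company_id gets substituted, roughly:

- test:
    - name: "create application"
    - url: "/applications"
    - method: 'POST'
    - headers: {template: {'Content-Type': 'application/json', 'companyId': '$company_id'}}
    - body: '{"name": "My fifth app"}'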

Exists Extractors failing when value is null

The following test fails when the values of some keys are zero in the result:

- config:
    - testset: "Finance"
    - generators:  
        - 'ACCESSTOKEN': {type: 'env_variable', variable_name: ACCESSTOKEN}
- test:
    - group: "Revenue"
    - name: "Get Revenue"
    - url: "/rpc/v1/finance.get-revenue.json"
    - method: "POST"
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators: 
        - extract_test: {jsonpath_mini: "0.id",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.dueDate",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.account",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.account_name",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.person",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.client_name",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.description",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.portion",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.repeatTimes",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.value",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.creationDate",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.status",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.expired",  test: "exists"}

The result:

[
    {
        "id":1,
        "dueDate":"2014-12-24",
        "account":4,
        "account_name":"Account 1",
        "person":1,
        "client_name":"Client1",
        "description":"Revenue description Test One",
        "portion":0,
        "repeatTimes":0,
        "value":1000,
        "creationDate":"2014-12-24",
        "status":"-1",
        "expired":"0"
    }
]

portion and repeatTimes fail (their values are 0).
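
A plausible cause (an assumption; not verified against the pyresttest source here) is an exists check that treats any falsy extracted value as missing, in which case the fix is an explicit None comparison:

def exists_test_buggy(extracted_value):
    return bool(extracted_value)        # 0, "" and [] wrongly count as "does not exist"

def exists_test_fixed(extracted_value):
    return extracted_value is not None  # only a genuinely missing value fails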

Could not get the cookies from http response

Hi,
We've been trying to use pyresttest for testing a web application, but we couldn't get the cookie through the header extracts

- headers: {Content-Type: application/json}
- extract_binds:
          - 'Set-Cookie': { header: 'Set-cookie'}

but we could not get any results.

Could someone please explain why this happens and how I could fix the above example to make it work?

Refactoring pyresttest internals

As a pyresttest python API user, it's crazy that there are so many inconsistencies in the internal naming, let's fix these:

  • parse methods can be 'build_*' or 'make_*' or 'parse_*' -- let's make them all 'parse_*'
  • executor methods should all be 'run_*' but are NOT
  • failures are returned as ValidationFailure objects, they should be TestFailure objects, maybe move out of validation library and into common library or similar?

Can anyone help me write YAML for the following GET JSON response?

YAML File:

- config:
    - testset: "Basic tests"
- test: # create entity
    - name: "Basic get"
    - url: "/nitro/v1/config/af_config_info"
    - auth_username: "testuser"
    - auth_password: "testuser"
    - validators:
        - extract_test: {jsonpath_mini: "0.af_config_info", test: "exists"}

ERROR:Test Failed: Basic get URL=http://localhost:80/rest/v1/config/af_config_info Group=Default HTTP Status Code: 200
ERROR:Test Failure, failure type: Validator Failed, Reason: Extract and test validator failed on test: exists(None)
ERROR:Validator/Error details:Extractor: Extractor Type: jsonpath_mini, Query: "0.af_config_info", Templated?: False
ERROR:Test Failure, failure type: Validator Failed, Reason: Extract and test validator failed on test: exists(None)

For the following JSON RESPONSE BODY:

{ "errorcode": 0, "message": "Done", "additionalInfo": { "cert_present": "false" }, "af_config_info": [ { "propkey": "NS_INSIGHT_LIC", "propvalue": "1" }, { "propkey": "CB_DEPLOYMENT", "propvalue": "FALSE" }, { "propkey": "NS_DEPLOYMENT", "propvalue": "TRUE" }, { "propkey": "CR_ENABLED", "propvalue": "0" }, { "propkey": "SLA_ENABLED", "propvalue": "1" }, { "propkey": "URL_COLLECTION_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_DOMAIN_ENABLED", "propvalue": "1" }, { "propkey": "USER_AGENT_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_REQ_METHOD_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_RESP_STATUS_ENABLED", "propvalue": "1" }, { "propkey": "OPERATING_SYSTEM_ENABLED", "propvalue": "1" }, { "propkey": "REPORT_TIMEZONE", "propvalue": "local" }, { "propkey": "SERVER_UTC_TIME", "propvalue": "1438947611" }, { "propkey": "MEDIA_TYPE_ENABLED", "propvalue": "1" }, { "propkey": "CONTENT_TYPE_ENABLED", "propvalue": "1" } ] }

Add extension import mechanisms

Extensions should allow adding generators, validators, etc

Parts:

  • Code implementation
  • Test basic imports
  • Document basics

(This is currently underway)

Return multiple schema validation errors

Currently the schema validation module returns a single exception when schema validation fails, rather than reporting all errors.

As a json schema validation user, let's fix that!
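
A minimal sketch of how this could be done with the jsonschema library, whose validators expose iter_errors for collecting every violation rather than raising on the first:

import jsonschema

def all_schema_errors(instance, schema):
    """Collect a message for every violation instead of stopping at the first failure."""
    validator = jsonschema.Draft4Validator(schema)
    return [error.message for error in validator.iter_errors(instance)]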

Add Caching To Tests/Benchmarks

Performance enhancement
Add a caching layer to:

  • validators.MiniJsonExtractor (cache the parsed JSON object, key is the hash of unparsed content?)
  • new JsonSchema validator (cache the parsed schema object, key is the schema filename or content hash?)
  • ContentHandler content object (cache the content, using a key somehow linked to the file/content)

This will enable appropriate lazy-load of files and content, but still allow for dynamism.

For options, take a look at the built-in weakref.WeakValueDictionary.
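
A minimal caching sketch using that WeakValueDictionary; the dict subclass is needed because plain dicts cannot be weakly referenced, and the hash-of-content key is an assumption:

import hashlib
import json
import weakref

class ParsedJson(dict):
    """dict subclass so parsed documents can live in a WeakValueDictionary."""
    pass

_json_cache = weakref.WeakValueDictionary()

def parse_json_cached(raw_text):
    key = hashlib.sha1(raw_text.encode('utf-8')).hexdigest()
    doc = _json_cache.get(key)
    if doc is None:
        doc = ParsedJson(json.loads(raw_text))  # assumes a top-level JSON object
        _json_cache[key] = doc
    return doc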

Document test_resttest execution

Looks like it's using some simple service, but details on how to run the tests are missing. If it's something that has to be set up, it would be nice to know how. Even better if it could be independent of any setup by the user. Maybe deploy the target service somewhere (OpenShift Online?) or use a stable public API?

Delighter: convenient, easy 'basic benchmark'

The idea here is not to ask users for a complex set of arguments when running benchmarks, just generate a tidy report for the most common use case.

A la palb.

This offers an easy starting point for users looking for a 'basic benchmark' and we already have all the hooks to collect the stats used here, anyway. Well, aside from 50%/75%/99% timings but that isn't too hard. Plus they get all the magic with generators/templating/special configuration/etc that we offer for pyresttest.
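
For the missing 50%/75%/99% timings, a nearest-rank percentile over the recorded response times would be enough, e.g.:

import math

def percentile(times, pct):
    """Nearest-rank percentile (pct in 0-100) over a list of response times in seconds."""
    if not times:
        return None
    ordered = sorted(times)
    rank = int(math.ceil(pct / 100.0 * len(ordered)))
    return ordered[max(rank - 1, 0)]

# e.g. percentile(benchmark_times, 99) for the 99% line in a report like the one below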

Open question: how to modify syntax for this benchmark? Define a simple 'basic benchmark' implementing this, to avoid argument collisions with benchmark?

An example of the kind of report to generate (palb/ab-style output):

Average Document Length: 23067 bytes

Concurrency Level:    4
Time taken for tests: 6.469 seconds
Complete requests:    100
Failed requests:      0
Total transferred:    2306704 bytes
Requests per second:  15.46 [#/sec] (mean)
Time per request:     250.810 [ms] (mean)
Time per request:     62.702 [ms] (mean, across all concurrent requests)
Transfer rate:        348.22 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       43    46   3.5     45      73
Processing:    56   205  97.2    188     702
Waiting:       90   100  12.6     94     138
Total:        106   251  97.1    234     750

Percentage of the requests served within a certain time (ms)
  50%    239
  66%    257
  75%    285
  80%    289
  90%    313
  95%    382
  98%    613
  99%    717
 100%    768 (longest request)

Set Up Jenkins Automation for Project

At this point, it's important to have automation for testing, and especially for platform compatibility before releases. The best way to do this is via Jenkins, but there are some things that need to happen to get there.

See Stack Overflow post on python unittests in Jenkins.

This is a necessary precursor for automated testing of scripts. It p

Done:

  1. Jenkins host to use (exists and configured in AWS, currently idle)
  2. Permissioned svanoort-jenkins GitHub user for Jenkins to use with this project

TODO:

  1. Code change: refactor run_tests.sh or provide alternative(s) to enable running tests from Jenkins
  2. Jenkins configuration: set it up to run tests for pyresttest, probably via nose, and return on failure
  3. Set up github pull request builder for this repo and validated merge plugin
  4. Set up a more exhaustive test environment covering different platforms:
    • CentOS 6 docker builder & unittests (python 2.6)
    • CentOS 7 docker builder & unittests (python 2.7?)
    • Ubuntu unittesting
  5. Installation testing for same (rpm build too?)

Future (blocked by other things):

  • Python 3 tests

Testing posting form data?

Hi. I want to be able to test posting form data fields.

For example in postman you would click form-data and fill in the keys/values.

How can I do it with pyresttest? I can't seem to get it to work.

Thanks.
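
Based on the POST examples in other issues here (a url-encoded body string plus explicit headers), a form-style post would presumably look something like this (endpoint and fields are made up):

- test:
    - name: "Post form data"
    - url: "/form_endpoint"
    - method: "POST"
    - headers: {Content-Type: application/x-www-form-urlencoded}
    - body: "field1=value1&field2=value2"
    - expected_status: [200]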

Reuse of previous GET response fields in subsequent GET requests in YAML

Hi,

I would like to reuse fields from the first GET response in the next GET request's REST URL params.

For example:

- test: # create entity
    - group: "Quickstart"
    - name: "Basic get"
    - url: "/v1/config/device"

Assume it returns a list of devices with <id, name> in the response body as:
"device": [{id: AUTO_GENERATED_ID1, name: abc}, {id: AUTO_GENERATED_ID2, name: xyz}]

For the next test I would like to GET info about ID AUTO_GENERATED_ID1 via the REST URL:

- url: "/v1/config/device/AUTO_GENERATED_ID1"

How do I write YAML to achieve this?

Also, how can we define YAML for dynamic REST URL params?

ex: url/current_stock_price?Date:23062015

For the date param I want to pass a user-defined date without modifying the YAML file, like reading an argument in a shell script or executing the "date" command and passing its output along. Can we do something similar in this framework?

Thanks in advance.
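
Based on the extract_binds, url template, and env_variable generator patterns used in other issues here, a sketch of the two requests (plus an environment-supplied date) might look roughly like the following; whether the generator can be used this way without an explicit bind is an assumption taken from the Finance example above:

- config:
    - generators:
        - 'QUERYDATE': {type: 'env_variable', variable_name: QUERYDATE}
- test:
    - name: "Basic get"
    - url: "/v1/config/device"
    - extract_binds:
        - 'device_id': {'jsonpath_mini': 'device.0.id'}
- test:
    - name: "Get one device"
    - url: {template: "/v1/config/device/$device_id"}
- test:
    - name: "Current stock price"
    - url: {template: "/current_stock_price?Date=$QUERYDATE"}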

Get last entry of array with json_path_mini

I know it relates to this generic issue (https://github.com//issues/39), but I need a specific answer.

- config:
    - testset: "Inventory Group"
    - generators:
       - 'ACCESSTOKEN': {type: 'env_variable', variable_name: ACCESSTOKEN}
- test:
    - group: "Group"
    - name: "Post Group"
    - url: "/rpc/v1/inventory.post-group.json"
    - method: "POST"
    - expected_status: [200]
    - body: "name=NewGroup"
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status",     comparator: "eq",     expected: 1}
- test:
    - group: "Group"
    - name: "Get group"
    - url: "/rpc/v1/inventory.get-group.json"
    - method: "POST"
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - extract_test: {jsonpath_mini: "0.id",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.name",  test: "exists"}
    - extract_binds:
        - 'CURRENTID': {'jsonpath_mini': 'LAST.id'}
- test:
    - group: "Group"
    - name: "Update group"
    - url: "/rpc/v1/inventory.put-group.json"
    - method: "POST"
    - body: {template: "name=NewName&id=$CURRENTID"}
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status",     comparator: "eq",     expected: 1}

Is there a way to get the last entry of the array that the get-group method returns? Something to replace the LAST key in the tests.

Stable Tag

I need a stable version to use in my project. Until now I have been using master, but it eventually breaks and my Jenkins goes crazy.

Silent extraction failures

Based on #63, it looks like there is a potential for extractor processing to fail without printing an error or exception.

We need to add a logging statement and test for the same, to ensure this is captured.

See tests.py, lines 180-184 (note lack of logging, just print statement)
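
A minimal sketch of the intended change, with hypothetical names since the exact code in tests.py is not reproduced here:

import logging
logger = logging.getLogger('pyresttest')

def safe_extract(extract_fn, bind_name):
    """Run an extraction callable; log (rather than silently print) on failure."""
    try:
        return extract_fn()
    except Exception as exc:
        logger.error("Extractor failed for bind '%s': %s", bind_name, exc)
        return None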

Is there a way to set jsonpath_mini extracted data as a variable?

- config:
    - testset: "Inventory Group"
    - generators:
       - 'ACCESSTOKEN': {type: 'env_variable', variable_name: ACCESSTOKEN}
- test:
    - group: "Group"
    - name: "Post Group"
    - url: "/rpc/v1/inventory.post-group.json"
    - method: "POST"
    - expected_status: [200]
    - body: "name=NewGroup"
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status",     comparator: "eq",     expected: 1}
- test:
    - group: "Group"
    - name: "Get group"
    - url: "/rpc/v1/inventory.get-group.json"
    - method: "POST"
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - extract_test: {jsonpath_mini: "0.id",  test: "exists"}
        - extract_test: {jsonpath_mini: "0.name",  test: "exists"}
    - variable_binds: {CURRENTID: "0.id"}
- test:
    - group: "Group"
    - name: "Update group"
    - url: "/rpc/v1/inventory.put-group.json"
    - method: "POST"
    - body: {template: "name=NewName&id=$CURRENTID"}
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status",     comparator: "eq",     expected: 1}

Given this example, is there a way to get 0.id and set it as CURRENTID?
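
Based on the extract_binds usage shown in the "Get last entry of array" issue above, the variable_binds line would presumably be replaced by an extract bind on the second test, roughly:

- test:
    - group: "Group"
    - name: "Get group"
    - url: "/rpc/v1/inventory.get-group.json"
    - method: "POST"
    - extract_binds:
        - 'CURRENTID': {'jsonpath_mini': '0.id'}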

Suggestion: Allow verifying return of API against jsonschema

I am looking at using this for building functional tests of a JSON-based API. The one feature I would like to see is the option of verifying json output of the API against a json-schema.

Looking at the documentation, perhaps this could be implemented as an extractor. I may take a look at it once I start using pyresttest.

Proposals for pyresttest v2

I am currently considering changes for the next version of PyRestTest (version 2). In order to extend the framework, I would like to make some design changes, which may break back compatibility (generally in small ways).

Because of this, I'm throwing it out for community/user comment. Please comment on this issue to weigh in with your thoughts and say whether any of these would cause problems for your use case.

Proposed changes:

  1. Replace the Command line arguments / TestConfig / Test / Benchmark configuration with a nested set of configuration scope objects. This may take place over a couple releases, but should enable greater control of test configuration, and simplify adding new options. Con: may break some pure-Python users.
  2. Replace explicit templating with more powerful implicit templating, using a library. This makes the syntax cleaner and more powerful, and greatly simplifies the logic for generating content. As an intermediate step, I may continue to support the existing template syntax where currently allowed. Con: you will need to escape special characters. It may also be slightly slower, depending. The template syntax will probably be Jinja2 (see the short sketch at the end of this issue).
  3. Remove implicit type conversions in the YAML test parser. Yes, it's beautiful that it tries to implicitly fix things. However, this makes the parsing code far more complex than needed, and rather hard to extend with new features and options. Con: if you're relying on this feature, tests break.
  4. Reach goal: execution phases. The idea here is to refactor the run_test, run_benchmark, and run_testsets methods into discrete steps. This allows for much easier enhancements, and may constitute a new extension point. End goal is to enable load testing, parallel execution, and much easier extensions of these steps.

The combination of the above steps should make the framework easier to use and more powerful.

Notes:

Templating performance comparison:
http://stackoverflow.com/questions/1324238/what-is-the-fastest-template-system-for-python
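
A tiny illustration of the Jinja2-style implicit templating contemplated in item 2 (sketch only; the final syntax is not decided):

from jinja2 import Template

# Context variables (e.g. from extract_binds or generators) would render implicitly:
rendered_url = Template("/api/device/{{ device_id }}/status").render(device_id=42)
# -> "/api/device/42/status"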

Add FailureReason objects

User Story:
As a PyRestTest user, I'd like to be able to see why a test failed without diving into logging levels, in a human-readable way.

Add FailureReason objects to TestResult & BenchmarkResult objects & their execution methods

  • FailureReason object includes:
    • str(self) method that gives a human readable output to use (instead of logging methods)
    • Validator info, to describe what's being tested
    • Result of validator run, to describe how it broke
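
A minimal sketch of what such an object could look like (field names are assumptions):

class FailureReason(object):
    """Human-readable description of why a test or benchmark failed."""

    def __init__(self, message, validator=None, validator_result=None):
        self.message = message                     # short human-readable summary
        self.validator = validator                 # what was being tested
        self.validator_result = validator_result   # how it broke

    def __str__(self):
        details = ''
        if self.validator is not None:
            details = ' (validator: {0}, result: {1})'.format(self.validator, self.validator_result)
        return self.message + details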

Add flexible test/benchmark executors

As a user, I would like to be able to execute tests in parallel, or with different reporting, logging, and execution pipelines (say, optimized execution for load testing).

I might also want to implement a pre-compiler/pre-validator for benchmark/test sets that limits templating and curl reconfiguration where possible. This is more efficient than trying to include lots of if/then in running methods.

Code requirements:

  • refactor benchmark/test runner methods to be more modular
  • create object/structure describing the execution pipeline more flexibly

Test Result summary

@svanoort: we are looking for a complete automation suite with multiple YAML files and test case IDs. Can we get one summary for all the YAML files in one place?

Setting up Authorization for all tests in a file

Hello,

Just getting started, but thanks for what looks like a great tool! :)

When writing multiple tests against the same API that requires authorization, we are currently writing the following for each test:

 - auth_username: "admin"
 - auth_password: "district"

Is there a way we could specify that these settings are used for all the tests in that file?

#Simple test file that simply does a GET on the base URL

---
- config: 
    - testset: "ME API" #Name test sets
- test: 
    - group: "api/me"
    - url: "/api/me" #Basic test to see if it is alive
    - auth_username: "admin"
    - auth_password: "district"
    - name: "End points exists"
    - expected_status: [200]
- test: 
    - group: "api/me"
    - url: "/api/me.json" #Basic validation test
    - auth_username: "admin"
    - auth_password: "district"
    - name: "Field validation should pass"
    - validators:
        - compare: {jsonpath_mini: "name",  comparator: "eq",  expected: 'John Traore'}
        - compare: {jsonpath_mini: "organisationUnits.0.name",  comparator: "eq",  expected: 'Sierra Leone'}
        - extract_test: {jsonpath_mini: "key_should_not_exist",  test: "not_exists"}

PyPI/Pip registration

  • Finish configuration for PyPI registration
  • Fix pip install/uninstall issues
  • Verify install on Ubuntu, Fedora 20, RHEL6, RHEL7

Reuse a value extracted from jsonpath_mini on a different test set

Thanks for this excellent test tool. I did see that we can extract values from the HTTP response and use them in validators. Is there a method to extract a value and use it in a subsequent test (use it as a variable)?

I tried jsonpath_mini with variable_binds but that did not work. Can you provide an example of that?

--Thanks

v2 Part 1: Refactor nightmarish parsing configuration to be DRY and elegant

The way parsing is handled for the core tests is a nightmare and seriously impedes the ability to extend pyresttest. We have a special case for each parse option, and the handling of command-line, test-level, and testset options is completely bonkers, since each has separate parsing code.

Let's apply lessons learned to fix this. Instead of the long, brutish mess of parsing (and tests of the parsing), let's follow the approach used in the validator classes and apply a registry + dynamic parse function method. But let's do it better, since type coercion is required.

Use registries:

RESERVED_KEYWORDS = {'test', 'blah', 'bleh'}  # Cannot be used as a macro name
MACROS = {'test': parse_test, 'benchmark': parse_benchmark}  # Consider carefully
STEPS = {}  # Currently not used; used when run methods get subdivided, these will be components of a macro
OPTIONS = {   # Can occur at global, macro, or testset level
    'name': (coerce_function, type, default, duplicate_allowed)
}

Need to think about options, but items will be iteratively run through each dictionary looking for matches and then handed to parse functions. For options, if duplicates are added, they'll be combined together.

For the command-line ("global") arguments, they need to get mapped to an option too, but it'll be more flexible.

This also acts as a useful and compatibility-maintaining precursor to the nested context structure from #45.
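
A rough sketch of the dispatch loop described above (names mirror the registries; coercion details and duplicate handling are omitted):

def parse_element(name, config_value, context_options):
    """Dispatch one parsed YAML key to the matching registry entry (sketch only)."""
    if name in MACROS:
        return MACROS[name](config_value)        # e.g. parse_test(config_value)
    if name in OPTIONS:
        coerce_function = OPTIONS[name][0]
        context_options[name] = coerce_function(config_value)
        return None
    raise ValueError("Unknown configuration element: {0}".format(name))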

Work Items

  • Create macro registry in macros (but no loading yet) - these are name-to-macro-class mappings
    • We don't just bind to the parse function, we can do more with the class itself (besides calling its parse)
  • Refactor testset parsing to use the Macro.parse static method, and to call it for the test/benchmark parsing
  • Refactor to use common methods and scope-order-resolution for the macro fields via resolve_option in macros.py
