svanoort / pyresttest
Python Rest Testing
License: Apache License 2.0
I'm looking for import examples so I can reuse my configurations. I tried every way I thought it could be done, but none of them worked. It would be nice to see some examples of url as a main-level syntax element too.
As the wiki says:
There are 5 top-level test syntax elements:
url: a simple test; fetches the given url via GET request and checks for a good response code
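For reference, both elements in use might look like this (a hedged sketch; the file and endpoint names are assumptions):

---
- import: common_config.yaml   # Reuse shared configuration from another file
- url: "/api/ping"             # Bare url element: simple GET, checks for a good response code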
Currently, headers on requests are parsed and stored as a dictionary of key-value pairs.
It would be nice to convert that to a list of (key, value) tuples to support setting duplicate headers (for example, cookie values).
This requires changing the tests & test_tests classes, and changing how headers are parsed, stored, and passed to pycurl. Which sounds like a lot but is, in fact, super-duper easy.
if head:  # Convert headers dictionary to list of header entries, tested and working
    headers = [str(headername) + ':' + str(headervalue) for headername, headervalue in head.items()]

# BECOMES

if head:  # Convert list of (key, value) tuples to list of header entries
    headers = [str(headername) + ':' + str(headervalue) for (headername, headervalue) in head]
Note that for backwards compatibility, we still need to support receiving a YAML dictionary of header values in the test config.
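A back-compatible normalization might look like this (a minimal sketch, not the actual pyresttest code; the function name is an assumption):

def coerce_headers(raw):
    # Accept the legacy YAML dictionary form, or a list form that
    # permits duplicate header names (e.g. multiple cookies).
    if isinstance(raw, dict):
        return list(raw.items())  # legacy: unique keys only
    pairs = []
    for entry in raw:
        if isinstance(entry, dict):  # list of single-entry dicts from YAML
            pairs.extend(entry.items())
        else:  # already a (name, value) pair
            name, value = entry
            pairs.append((name, value))
    return pairs

print(coerce_headers({'Accept': 'application/json'}))
print(coerce_headers([{'Set-Cookie': 'a=1'}, {'Set-Cookie': 'b=2'}]))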
As a pyresttest user I'd like to be able to parallelize the test execution (parallel HTTP calls).
TL;DR Summary of Analysis
Need to start working out code for the above.
Precursor: using curl reset when reusing curl handles.
Look at using CurlMulti, see example:
https://github.com/Lispython/pycurl/blob/master/examples/retriever-multi.py
See also: https://github.com/tornadoweb/tornado/blob/master/tornado/curl_httpclient.py
PyCurl Multi Docs: http://pycurl.sourceforge.net/doc/curlmultiobject.html#curlmultiobject
LibCurl: http://curl.haxx.se/libcurl/c/libcurl-multi.html
Using multiprocessing pools for process-parallel execution: http://stackoverflow.com/questions/3842237/parallel-processing-in-python.
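For orientation, here is a minimal single-threaded sketch of the CurlMulti pattern from the retriever-multi example linked above (the URLs are placeholders):

import pycurl
from io import BytesIO

urls = ['http://localhost:8000/a', 'http://localhost:8000/b']  # placeholders
multi = pycurl.CurlMulti()
handles = []
for url in urls:
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEFUNCTION, buf.write)
    multi.add_handle(c)
    handles.append((c, buf))

num_active = len(handles)
while num_active:
    while True:  # drive transfers until libcurl has nothing more to do right now
        ret, num_active = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
    multi.select(1.0)  # block until sockets are ready instead of spinning

for c, buf in handles:
    print(c.getinfo(pycurl.RESPONSE_CODE), len(buf.getvalue()))
    multi.remove_handle(c)
    c.close()
multi.close()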
Concurrency should be managed at the testset level; reasoning below.
Config syntax:
---
- config:
    concurrency: all   # Maximum: one thread per test run
    concurrency: 1     # Single-threaded, always serial
    concurrency: none  # Another way to ensure serial
    concurrency: -1    # Yes, this is serial too, as is anything <= 1
    concurrency: 4     # Up to 4 requests at once
    concurrency: 16    # Up to 16 requests at once, if that many tests exist
Implementation:
All initial parsing runs first; then we decide how to execute (serial or concurrent).
For concurrent execution, I see 4 levels of concurrency, with increasing concurrent resource use and performance, but also increasing complexity:
Analysis:
Test overhead:
time python profile_basic_test.py
Decision Point:
Not_Equals and NE don't work; they're mapped to 'operator.eq' in validators.py.
Hi,
I installed your package as per your readme instructions, and after running through the quickstart I started to play with my own API: a call to a Basic Auth secured function using the following YAML:
Running with the command 'resttest.py http://10.1.1.84:80 apitest2.yaml --log debug --interactive true --print-bodies true' gives the following:
REQUEST:
GET http://10.1.1.84:80/api/session/
Press ENTER when ready:
DEBUG:Initial Test Result, based on expected response code: False
RESPONSE:
Unauthorized Access
DEBUG:{"test": {"expected_status": [200], "_headers": {}, "group": "Session", "name": "Get /session/", "_url": "http://10.1.1.84:80/api/session/", "templated": {}}, "failures": [{"failure_type": "Invalid HTTP Response Code", "message": "Invalid HTTP response code: response code 401 not in expected codes [[200]]", "validator": null, "details": null}], "response_code": 401, "body": "Unauthorized Access", "passed": false}
ERROR:Test Failed: Get /session/ URL=http://10.1.1.84:80/api/session/ Group=Session HTTP Status Code: 401
Test Group Session FAILED: 0/1 Tests Passed!
No sign of the auth info in the data. So I had a look at the code and got confused, as it seemed fine. Then I ran the resttest.py script directly in python (after adding the line 'print "AUTH: %s:%s" % (mytest.auth_username, mytest.auth_password)' at line 268 in /pyresttest/resttest.py) and got the following:
REQUEST:
GET http://10.1.1.84:80/api/session/
AUTH: [email protected]:pass
Press ENTER when ready:
DEBUG:Initial Test Result, based on expected response code: True
DEBUG:no validators found
RESPONSE:
{
"duration": 43200,
"friendly_name": "A Wade",
"token": "eyJhbGciOiJIUzI1NiIsImV4cCI6MTQyMzg2OTk4NCwiaWF0IjoxNDIzODI2Nzg0fQ.eyJlbWFpbCI6ImFyY2hpZS53YWRlQGFyaWEtbmV0d29ya3MuY29tIn0.NaoVU3gM-JDEy162R1TcBpZgzkpgmsGwqryJTSpm1rI"
}
DEBUG:{"body": "{\n "duration": 43200, \n "friendly_name": "A Wade", \n "token": "eyJhbGciOiJIUzI1NiIsImV4cCI6MTQyMzg2OTk4NCwiaWF0IjoxNDIzODI2Nzg0fQ.eyJlbWFpbCI6ImFyY2hpZS53YWRlQGFyaWEtbmV0d29ya3MuY29tIn0.NaoVU3gM-JDEy162R1TcBpZgzkpgmsGwqryJTSpm1rI"\n}", "response_headers": {"content-length": "235", "expires": "Fri, 13 Feb 2015 11:26:23 GMT", "server": "nginx/1.4.6 (Ubuntu)", "connection": "close", "pragma": "no-cache", "cache-control": "no-cache", "date": "Fri, 13 Feb 2015 11:26:24 GMT", "content-type": "application/json"}, "response_code": 200, "passed": true, "test": {"expected_status": [200], "_headers": {}, "group": "Session", "name": "Get /session/", "_url": "http://10.1.1.84:80/api/session/", "templated": {}, "auth_password": "pass", "auth_username": "[email protected]"}, "failures": []}
Test Group Session SUCCEEDED: 1/1 Tests Passed!
This is what I expected originally. So why would the auth work fine on one run-through but not the other? I suspect the install is pointing to an older version of the script somehow, but I don't know enough to prove that.
Also, running the test:
Leads to:
Traceback (most recent call last):
File "resttest.py", line 720, in
command_line_run(sys.argv[1:])
File "resttest.py", line 716, in command_line_run
main(args)
File "resttest.py", line 685, in main
failures = run_testsets(tests)
File "resttest.py", line 547, in run_testsets
result = run_test(test, test_config = myconfig, context=context)
File "resttest.py", line 250, in run_test
curl = templated_test.configure_curl(timeout=test_config.timeout, context=my_context)
File "/home/archie/Projects/APITesting/pyresttest/pyresttest/tests.py", line 264, in configure_curl
curl.setopt(curl.READFUNCTION, StringIO(bod).read)
TypeError: must be string or buffer, not None
presumably because of the (intentional but perhaps misguided) lack of a body in the PUT call.
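A possible guard for the configure_curl code in the traceback (a hedged sketch, not necessarily how it should be fixed upstream):

import pycurl
from io import StringIO  # stands in for Python 2's StringIO in the original code

def set_put_body(curl, bod):
    # Only wire up the upload callback when a body actually exists, so a
    # body-less PUT doesn't hit 'must be string or buffer, not None'.
    if bod is not None:
        curl.setopt(pycurl.READFUNCTION, StringIO(bod).read)
        curl.setopt(pycurl.INFILESIZE, len(bod))
    else:
        curl.setopt(pycurl.INFILESIZE, 0)  # explicitly empty upload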
I really like the package though, it will be very useful in the future, so thank you very much for building it and putting it up here.
Cheers,
Wade
As a developer, I want to check that the arguments of supplied functions match what I'd expect when registering extensions (i.e., a comparator takes 2 args, etc.). Specifically, I'd like to see something like the registration-time check sketched below.
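A minimal sketch (the register_comparator helper and registry are assumptions, not pyresttest API):

import inspect

def register_comparator(name, func, registry):
    # Verify at registration time that the comparator accepts exactly
    # two positional arguments, instead of failing later mid-test.
    params = inspect.signature(func).parameters.values()
    positional = [p for p in params
                  if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)]
    if len(positional) != 2:
        raise TypeError('Comparator %r must take 2 args, got %d'
                        % (name, len(positional)))
    registry[name] = func

comparators = {}
register_comparator('str_eq', lambda a, b: str(a) == str(b), comparators)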
WARNING:Failed to load jsonschema validator, make sure the jsonschema module is installed if you wish to use schema validators.
Test Group Default SUCCEEDED: 1/1 Tests Passed!
If I am not using JSON schema validation, how do I remove this warning?
It would be useful in my case to be able to validate that a server is returning a response of the correct type before validating the actual response body.
Hello --
We've been trying to use pyresttest for testing a Spring web application, where a CSRF token needs to be used for several operations. pyresttest seems to be perfect for the task, but for some reason we couldn't get the header extraction to work as it should...
---
- config:
    - testset: "Example test set"
- test:
    - name: "Initial call to get the CSRF"
    - url: "/get_token"
    - extract_binds:
        - token: {header: "x-csrf-token"}
- test:
    - name: "Test call with token"
    - url: {template: "/example_endpoint/$token/"}
I'm testing with pyresttest http://localhost:8081 test.yaml --interactive=true --log=debug to see the calls, and it gives the following error message:
ERROR:Test Failed: Test with token URL=http://localhost:8081/example_endpoint/$token/ Group=Default HTTP Status Code: 404
...so the templating is clearly going wrong somewhere.
Could someone please explain why this happens and how I could fix the above example to make it work? I would greatly appreciate any help.
Kind regards,
Zoltán
I am trying to extract 'company_id' from one test and use it as a header in the second test. Is it possible? Thanks.
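One pattern that should cover this (a hedged sketch; the endpoints and JSON path are assumptions):

---
- test:
    - name: "Fetch company"
    - url: "/api/company"
    - extract_binds:
        - 'company_id': {jsonpath_mini: 'company_id'}
- test:
    - name: "Use the extracted id as a header"
    - url: "/api/company/details"
    - headers: {template: {'X-Company-Id': '$company_id'}}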
Hi,
I would like to use pyresttest in our CI. Is there currently support for XML test-result generation?
Thanks
This would make it easy for developers to debug the requests and tweak them to match the real needs of the tests.
Maybe a parameter like: --print-curl-request=true
The following test fails when a key's value is zero in the result:
- config:
    - testset: "Finance"
    - generators:
        - 'ACCESSTOKEN': {type: 'env_variable', variable_name: ACCESSTOKEN}
- test:
    - group: "Revenue"
    - name: "Get Revenue"
    - url: "/rpc/v1/finance.get-revenue.json"
    - method: "POST"
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - extract_test: {jsonpath_mini: "0.id", test: "exists"}
        - extract_test: {jsonpath_mini: "0.dueDate", test: "exists"}
        - extract_test: {jsonpath_mini: "0.account", test: "exists"}
        - extract_test: {jsonpath_mini: "0.account_name", test: "exists"}
        - extract_test: {jsonpath_mini: "0.person", test: "exists"}
        - extract_test: {jsonpath_mini: "0.client_name", test: "exists"}
        - extract_test: {jsonpath_mini: "0.description", test: "exists"}
        - extract_test: {jsonpath_mini: "0.portion", test: "exists"}
        - extract_test: {jsonpath_mini: "0.repeatTimes", test: "exists"}
        - extract_test: {jsonpath_mini: "0.value", test: "exists"}
        - extract_test: {jsonpath_mini: "0.creationDate", test: "exists"}
        - extract_test: {jsonpath_mini: "0.status", test: "exists"}
        - extract_test: {jsonpath_mini: "0.expired", test: "exists"}
The result:
[
{
"id":1,
"dueDate":"2014-12-24",
"account":4,
"account_name":"Account 1",
"person":1,
"client_name":"Client1",
"description":"Revenue description Test One",
"portion":0,
"repeatTimes":0,
"value":1000,
"creationDate":"2014-12-24",
"status":"-1",
"expired":"0"
}
]
portion and repeatTimes fail.
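A plausible root cause (an assumption, not confirmed here): an exists test implemented as a truthiness check, which treats 0 the same as a missing value:

def exists_buggy(value):
    return bool(value)        # 0 -> False, so portion/repeatTimes 'fail'

def exists_fixed(value):
    return value is not None  # only genuinely absent values fail

print(exists_buggy(0), exists_fixed(0))  # False True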
Hi,
We've been trying to use pyresttest for testing a web application, but we couldn't get the cookie through the header extracts:
- headers: {Content-Type: application/json}
- extract_binds:
- 'Set-Cookie': { header: 'Set-cookie'}
but we could not get any results.
Could someone please explain why this happens and how I could fix the above example to make it work?
As a pyresttest Python API user, I find it crazy that there are so many inconsistencies in the internal naming; let's fix these:
Add options to support HTTPS / certs on tests.
ERROR:Test Failed: Basic get URL=http://localhost:80/rest/v1/config/af_config_info Group=Default HTTP Status Code: 200
ERROR:Test Failure, failure type: Validator Failed, Reason: Extract and test validator failed on test: exists(None)
ERROR:Validator/Error details:Extractor: Extractor Type: jsonpath_mini, Query: "0.af_config_info", Templated?: False
ERROR:Test Failure, failure type: Validator Failed, Reason: Extract and test validator failed on test: exists(None)
{ "errorcode": 0, "message": "Done", "additionalInfo": { "cert_present": "false" }, "af_config_info": [ { "propkey": "NS_INSIGHT_LIC", "propvalue": "1" }, { "propkey": "CB_DEPLOYMENT", "propvalue": "FALSE" }, { "propkey": "NS_DEPLOYMENT", "propvalue": "TRUE" }, { "propkey": "CR_ENABLED", "propvalue": "0" }, { "propkey": "SLA_ENABLED", "propvalue": "1" }, { "propkey": "URL_COLLECTION_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_DOMAIN_ENABLED", "propvalue": "1" }, { "propkey": "USER_AGENT_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_REQ_METHOD_ENABLED", "propvalue": "1" }, { "propkey": "HTTP_RESP_STATUS_ENABLED", "propvalue": "1" }, { "propkey": "OPERATING_SYSTEM_ENABLED", "propvalue": "1" }, { "propkey": "REPORT_TIMEZONE", "propvalue": "local" }, { "propkey": "SERVER_UTC_TIME", "propvalue": "1438947611" }, { "propkey": "MEDIA_TYPE_ENABLED", "propvalue": "1" }, { "propkey": "CONTENT_TYPE_ENABLED", "propvalue": "1" } ] }
As a benchmark user, I want to see benchmarks output to terminal in CSV format if I've set file output to CSV
Extensions should allow adding generators, validators, etc
Parts:
(This is currently underway)
I need to send an environment variable through headers, but templating with generators simply does not do the job.
Currently the schema validation module raises an exception when schema validation fails, rather than printing all errors.
As a JSON schema validation user, let's fix that!
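The jsonschema library supports this directly via iter_errors, which yields every violation instead of raising on the first; a minimal sketch:

import jsonschema

schema = {'type': 'object',
          'properties': {'id': {'type': 'integer'},
                         'name': {'type': 'string'}},
          'required': ['id', 'name']}
document = {'id': 'not-an-int'}  # two problems: wrong type, missing 'name'

validator = jsonschema.Draft4Validator(schema)
for error in sorted(validator.iter_errors(document), key=lambda e: list(e.path)):
    print(error.message)  # report all failures, not just the first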
Performance enhancement
Add a caching layer to:
This will enable appropriate lazy-load of files and content, but still allow for dynamism.
For options, take a look at the builtin weakref.WeakValueDictionary.
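A sketch of how a WeakValueDictionary-backed cache could work for lazy file loads (the names are assumptions; plain str values need a weak-referenceable wrapper):

import weakref

class Content(str):
    pass  # str subclass, so cached text can be weakly referenced

_cache = weakref.WeakValueDictionary()

def read_cached(path):
    # Lazy-load file content; entries vanish automatically once no
    # test or benchmark still holds a reference to them.
    content = _cache.get(path)
    if content is None:
        with open(path) as f:
            content = Content(f.read())
        _cache[path] = content
    return content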
Looks like it's using some simple service, but details on how to run the test are missing. If it's something that has to be set up, it would be nice to know how. Even better if it could be independent of any setup by the user. Maybe deploy the target service somewhere (OpenShift Online?) or use a stable public API?
See functionaltests.py: calling log_failure does not correctly print the failure log.
The idea here is not to ask users for a complex set of arguments when running benchmarks, just generate a tidy report for the most common use case.
A la palb.
This offers an easy starting point for users looking for a 'basic benchmark' and we already have all the hooks to collect the stats used here, anyway. Well, aside from 50%/75%/99% timings but that isn't too hard. Plus they get all the magic with generators/templating/special configuration/etc that we offer for pyresttest.
Open question: how to modify syntax for this benchmark? Define a simple 'basic benchmark' implementing this, to avoid argument collisions with benchmark?
Average Document Length: 23067 bytes
Concurrency Level: 4
Time taken for tests: 6.469 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 2306704 bytes
Requests per second: 15.46 [#/sec] (mean)
Time per request: 250.810 [ms] (mean)
Time per request: 62.702 [ms] (mean, across all concurrent requests)
Transfer rate: 348.22 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 43 46 3.5 45 73
Processing: 56 205 97.2 188 702
Waiting: 90 100 12.6 94 138
Total: 106 251 97.1 234 750
Percentage of the requests served within a certain time (ms)
50% 239
66% 257
75% 285
80% 289
90% 313
95% 382
98% 613
99% 717
100% 768 (longest request)
At this point, it's important to have automation for testing, and especially for platform compatibility before releases. The best way to do this is via Jenkins, but there are some things that need to happen to get there.
See Stack Overflow post on python unittests in Jenkins.
This is a necessary precursor for automated testing of scripts. It p
Done:
TODO:
Future (blocked by other things):
Hi. I want to be able to test posting form-data fields.
For example, in Postman you would click form-data and fill in the keys/values.
How can I do it with pyresttest? I can't seem to get it to work.
Thanks.
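One approach that may work (a hedged sketch: pyresttest has no Postman-style form-data widget, but a url-encoded body plus the matching Content-Type header covers simple forms; the endpoint and fields are assumptions):

---
- test:
    - name: "Post form fields"
    - url: "/api/form"
    - method: "POST"
    - headers: {Content-Type: application/x-www-form-urlencoded}
    - body: "field1=value1&field2=value2"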
Is it possible to get the authorization header from the response and use it in subsequent requests?
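A hedged sketch using the header extractor (the URLs are assumptions, and response header name casing can matter):

---
- test:
    - name: "Login"
    - url: "/api/login"
    - extract_binds:
        - 'AUTH': {header: 'authorization'}
- test:
    - name: "Authorized call"
    - url: "/api/private"
    - headers: {template: {'Authorization': '$AUTH'}}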
Hi,
I would like to reuse fields from the first GET response as URL params in the next GET request.
Assume it returns a list of devices with <id, name> in the response body, as
"device": [{"id": AUTO_GENERATED_ID1, "name": "abc"}, {"id": AUTO_GENERATED_ID2, "name": "xyz"}].
For the next test I'd like to GET info about id AUTO_GENERATED_ID1 via the REST URL.
How do I write the YAML to achieve this?
Also, how can we define dynamic REST URL params in the YAML?
e.g. url/current_stock_price?Date:23062015
For the date param I want to pass a user-defined date without modifying the YAML file, like reading an argument in a shell script or executing the "date" command and passing the result as an argument. Can we do something similar in this framework?
Thanks in advance.
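For the first part, chaining via extract_binds plus a templated URL should work (a hedged sketch; the paths and the jsonpath are assumptions):

---
- test:
    - name: "List devices"
    - url: "/devices"
    - extract_binds:
        - 'DEVICE_ID': {jsonpath_mini: 'device.0.id'}
- test:
    - name: "Get info for the first device"
    - url: {template: "/devices/$DEVICE_ID"}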
I know it relates to [this](https://github.com//issues/39) generic issue, but I need a specific answer.
- config:
    - testset: "Inventory Group"
    - generators:
        - 'ACCESSTOKEN': {type: 'env_variable', variable_name: ACCESSTOKEN}
- test:
    - group: "Group"
    - name: "Post Group"
    - url: "/rpc/v1/inventory.post-group.json"
    - method: "POST"
    - expected_status: [200]
    - body: "name=NewGroup"
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status", comparator: "eq", expected: 1}
- test:
    - group: "Group"
    - name: "Get group"
    - url: "/rpc/v1/inventory.get-group.json"
    - method: "POST"
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - extract_test: {jsonpath_mini: "0.id", test: "exists"}
        - extract_test: {jsonpath_mini: "0.name", test: "exists"}
    - extract_binds:
        - 'CURRENTID': {'jsonpath_mini': 'LAST.id'}
- test:
    - group: "Group"
    - name: "Update group"
    - url: "/rpc/v1/inventory.put-group.json"
    - method: "POST"
    - body: {template: "name=NewName&id=$CURRENTID"}
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status", comparator: "eq", expected: 1}
Is there a way to get the last entry of the array that the get-group method returns? Something to replace the LAST key in the tests.
I need a stable version to use on my project. Until now I was using master, but it eventually breaks and my Jenkins goes crazy.
It looks like based on #63 there is a potential for extractor processing to fail without printing an error or exception in the process.
We need to add a logging statement and test for the same, to ensure this is captured.
See tests.py, lines 180-184 (note the lack of logging; just a print statement).
When can we expect Jenkins CI feature and https support in this framework?
Thanks
- config:
    - testset: "Inventory Group"
    - generators:
        - 'ACCESSTOKEN': {type: 'env_variable', variable_name: ACCESSTOKEN}
- test:
    - group: "Group"
    - name: "Post Group"
    - url: "/rpc/v1/inventory.post-group.json"
    - method: "POST"
    - expected_status: [200]
    - body: "name=NewGroup"
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status", comparator: "eq", expected: 1}
- test:
    - group: "Group"
    - name: "Get group"
    - url: "/rpc/v1/inventory.get-group.json"
    - method: "POST"
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - extract_test: {jsonpath_mini: "0.id", test: "exists"}
        - extract_test: {jsonpath_mini: "0.name", test: "exists"}
    - variable_binds: {CURRENTID: "0.id"}
- test:
    - group: "Group"
    - name: "Update group"
    - url: "/rpc/v1/inventory.put-group.json"
    - method: "POST"
    - body: {template: "name=NewName&id=$CURRENTID"}
    - expected_status: [200]
    - headers: {template: {"Authorization": "$ACCESSTOKEN"}}
    - validators:
        - compare: {jsonpath_mini: "status", comparator: "eq", expected: 1}
Given this example, is there a way of getting the 0.id and setting it as CURRENTID?
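Most likely yes: variable_binds sets literal values, while capturing a value from a response is what extract_binds is for. A hedged sketch, mirroring the extract_binds form from the earlier example:

- extract_binds:
    - 'CURRENTID': {'jsonpath_mini': '0.id'}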
I am looking at using this for building functional tests of a JSON-based API. The one feature I would like to see is the option of verifying the API's JSON output against a JSON schema.
Looking at the documentation, perhaps this could be implemented as an extractor. I may take a look at it once I start using pyresttest.
I am currently considering changes for the next version of PyRestTest (version 2). In order to extend the framework, I would like to make some design changes, which may break back compatibility (generally in small ways).
Because of this, I'm throwing this out for community/user comment. Please comment on this issue, to weigh in with your thoughts and if one of these would cause problems for your uses.
The combination of the above steps should make the framework easier to use and more powerful.
Templating performance comparison:
http://stackoverflow.com/questions/1324238/what-is-the-fastest-template-system-for-python
We need a tutorial on this with examples of binding/extraction/generators, I think.
It trips people up, and to be honest the existing documentation is overly technical.
User Story:
As a PyRestTest user, I'd like to be able to see why a test failed without diving into logging levels, in a human-readable way.
Add FailureReason objects to TestResult & BenchmarkResult objects & their execution methods
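A sketch of the shape such an object could take, mirroring the failure fields already visible in the debug output earlier on this page (the field names come from that output; the class itself is an assumption):

class FailureReason(object):
    def __init__(self, failure_type, message, details=None, validator=None):
        self.failure_type = failure_type  # e.g. 'Invalid HTTP Response Code'
        self.message = message            # human-readable explanation
        self.details = details            # optional extra context
        self.validator = validator        # validator that produced the failure

    def __str__(self):
        return '{0}: {1}'.format(self.failure_type, self.message)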
As a user, I would like to be able to execute tests in parallel, or with different reporting, logging, and execution pipelines (say, optimized execution for load testing).
I might also want to implement a pre-compiler/pre-validator for benchmark/test sets that limits templating and curl reconfiguration where possible. This is more efficient than trying to include lots of if/then in running methods.
Code requirements:
@svanoort: we are looking for a complete automation suite with multiple YAML files and with test-case IDs.
And can we get one summary for all the YAML files in one place?
Hello,
Just getting started, but thanks for what looks like a great tool! :)
When writing multiple tests against the same API that requires authorization, we are currently writing the following for each test:
- auth_username: "admin"
- auth_password: "district"
Is there a way we could specify that these settings are used for all the tests in that file?
# Simple test file that simply does a GET on the base URL
---
- config:
    - testset: "ME API"  # Name test sets
- test:
    - group: "api/me"
    - url: "/api/me"  # Basic test to see if it is alive
    - auth_username: "admin"
    - auth_password: "district"
    - name: "End points exists"
    - expected_status: [200]
- test:
    - group: "api/me"
    - url: "/api/me.json"  # Basic validation test
    - auth_username: "admin"
    - auth_password: "district"
    - name: "Field validation should pass"
    - validators:
        - compare: {jsonpath_mini: "name", comparator: "eq", expected: 'John Traore'}
        - compare: {jsonpath_mini: "organisationUnits.0.name", comparator: "eq", expected: 'Sierra Leone'}
        - extract_test: {jsonpath_mini: "key_should_not_exist", test: "not_exists"}
Feature: improve logging by defining a log hierarchy for the different modules (validators, generators, binding, executors, etc)
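A minimal sketch of what that hierarchy buys us: child loggers inherit configuration from the 'pyresttest' parent, so one knob tunes every module (the logger names are assumptions):

import logging

logging.basicConfig(level=logging.WARNING)

root = logging.getLogger('pyresttest')
validators_log = logging.getLogger('pyresttest.validators')
generators_log = logging.getLogger('pyresttest.generators')

root.setLevel(logging.DEBUG)  # one setting enables debug framework-wide
validators_log.debug('comparator registered')  # emitted via the hierarchy
generators_log.debug('generator bound')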
Something in /usr/bin ?
Thanks for this excellent test tool. I did see that we can extract values from the HTTP response and use them in validators. Is there a method to extract a value and use it in a subsequent test in the test set (i.e., use it as a variable)?
I tried jsonpath_mini with variable_binds, but that did not work. Can you provide an example for that?
--Thanks
The way parsing is handled for the core tests is a nightmare and seriously impedes the ability to extend pyresttest. We have a special case for each parse option, and the handling of command-line, test-level, and testset options is completely bonkers, since each has separate parsing code.
Instead of the long, brutish mess of parsing (and tests of the parsing), let's apply the lessons learned in the validator classes and use a registry + dynamic parse-function approach. But let's do it better, since type coercion is required.
Use registries:
RESERVED_KEYWORDS = {'test', 'blah', 'bleh'}  # Cannot be used as a macro name
MACROS = {'test': parse_test, 'benchmark': parse_benchmark}  # Consider carefully
STEPS = {}  # Currently unused; when run methods get subdivided, these will be components of a macro
OPTIONS = {  # Can occur at global, macro, or testset level
    'name': (coerce_function, type, default, duplicate_allowed)
}
We need to think about options, but items will be run iteratively through each dictionary looking for matches and then handed to parse functions. For options, if duplicates are added, they'll be combined together.
For the command-line ("global") arguments, they need to be mapped to an option too, but it'll be more flexible.
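A dispatch sketch over those registries (hedged; the tuple layout follows the OPTIONS entry above, everything else is an assumption):

def parse_item(key, value, macros, options, parsed):
    # Route one YAML (key, value) pair through the registries.
    if key in macros:
        return macros[key](value)  # macros own their parse function
    if key not in options:
        raise ValueError('Unknown element: {0}'.format(key))
    coerce_fn, _type, _default, duplicate_allowed = options[key]
    coerced = coerce_fn(value)  # type coercion happens once, here
    if key in parsed:
        if not duplicate_allowed:
            raise ValueError('Duplicate option: {0}'.format(key))
        parsed[key].append(coerced)  # duplicates are combined together
    else:
        parsed[key] = [coerced] if duplicate_allowed else coerced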
This also acts as a useful and compatibility-maintaining precursor to the nested context structure from #45.
Is it possible to follow a url redirect using pyresttest?