Write Selenium and Appium tests in Python using the Page Object pattern. This Pythonic GUI and API test automation framework will help you get started with QA automation quickly. It comes with many useful integrations like email, BrowserStack, Slack, TestRail, etc. This repository is developed and maintained by Qxf2 Services.
In the exception clause of smart_wait in page_objects/Base_Page.py, the conditional_write call uses a variable called wait_time. I believe it should be wait_seconds instead.
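A minimal sketch of the suspected fix, paraphrased from memory rather than copied from Base_Page.py (the split_locator and conditional_write signatures here are assumptions):

```python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def smart_wait(self, wait_seconds, locator):
    "Wait for an element to show up, for up to wait_seconds"
    result_flag = False
    try:
        WebDriverWait(self.driver, wait_seconds).until(
            EC.presence_of_element_located(self.split_locator(locator)))
        result_flag = True
    except Exception:
        self.conditional_write(result_flag,
            positive='Located the element: %s' % locator,
            # was: % (locator, wait_time) -> NameError, since the parameter in scope is wait_seconds
            negative='Could not locate the element %s even after %d seconds' % (locator, wait_seconds))
    return result_flag
```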
macOS v10.14.5
Tried to install dependencies but failed. Command and output:
pip3 install -r requirements.txt to install dependencies
Collecting to
Using cached https://files.pythonhosted.org/packages/05/8b/949ffc9ca52aca12b21985f6d97e7946c8c284a638ee15d8851592014fbe/to-0.3.tar.gz
Collecting install
ERROR: Could not find a version that satisfies the requirement install (from versions: none)
ERROR: No matching distribution found for install
Test boilerplate should have one log message clearly indicating the start of the test and its date. Currently we just append log messages to the test's log, which makes it hard to know where the previous test ended and where the re-run started.
Can we please add a log message to all our example tests to show where the test started? It would be good if the log message had a clear, human-readable timestamp (not Unix time) to tell me when the test kicked off. Make sure the visual representation of the message is clear and easy to spot when scrolling through logs, e.g., add a couple of new lines, some textual decoration, and use CAPS.
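A hedged sketch of what that banner could look like (write() stands in for the framework's logging helper; the exact decoration is up for discussion):

```python
from datetime import datetime

def log_test_start(self, test_name):
    "Write an eye-catching banner marking the start of a test run"
    timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')  # human-readable, not Unix time
    banner = '*' * 60
    self.write('\n\n' + banner)
    self.write('*** STARTED TEST %s AT %s ***' % (test_name.upper(), timestamp))
    self.write(banner + '\n')
```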
I am getting a warning message from the conftest.py file:
PytestDeprecationWarning: the pytest.config global is deprecated. Please use request.config or pytest_configure (if you're a pytest plugin) instead.
The offending line is: if pytest.config.getoption("-I").lower() == 'y':
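One way out, sketched under the assumption that the check can run at configure time (the -I option string is taken from the snippet above):

```python
# conftest.py: read the option off the config object pytest hands you,
# instead of the deprecated pytest.config global
def pytest_configure(config):
    if config.getoption("-I").lower() == 'y':
        # ... same interactive-mode logic as before ...
        pass
```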
Slack integration is fantastic. But I can see some use cases where we may want to email a set of people. Can we add a hook to pytest so it emails results?
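A rough sketch of such a hook; the --email_pytest_report option name and the email helper are assumptions, and the actual sending would wrap smtplib or our existing email utility:

```python
# conftest.py
def pytest_addoption(parser):
    parser.addoption("--email_pytest_report",
                     dest="email_pytest_report",
                     default="n",
                     help="Email the test report: y or n")

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    "After the run finishes, optionally email the summary to a set of people"
    if config.getoption("--email_pytest_report").lower() == 'y':
        passed = len(terminalreporter.stats.get('passed', []))
        failed = len(terminalreporter.stats.get('failed', []))
        subject = 'Test run finished: %d passed, %d failed' % (passed, failed)
        # send_email(subject, recipients) -- hypothetical helper; the
        # recipient list would come from a conf file
```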
I only see Base_Page.write() as the calling module in the log messages. That makes debugging really hard. I think this broke with the introduction of loguru, but I could be wrong.
Example of a bad log message:
2019-07-11 16:57:08.656 | INFO | utils.Base_Logging:write:60 - conditional_write | Found the expected item 'Gary Bio Sandalwood SPF-50' in the cart
^Notice that utils.Base_Logging:write:60 - conditional_write is completely useless. I want to know more about where that message actually originated.
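If loguru is indeed the culprit, one known remedy is its depth option, which shifts the frame loguru attributes the message to. A sketch (the right depth value depends on how many wrapper layers sit between the real caller and loguru):

```python
from loguru import logger

def write(self, msg, level='INFO'):
    "Log a message, attributing it to the caller rather than this wrapper"
    # depth=2 skips this write() wrapper and conditional_write(), so the
    # record points at the real calling module; tune for the actual call chain
    logger.opt(depth=2).log(level, msg)
```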
test_api_example.py does not take command line options. This is a break from the pattern of our other tests. Please add options in main for the test.
You will notice that you cannot simply do python tests/test_api_example.py -U http://localhost:5000 to run the test. We seem to need to use pytest to send a URL (the -A option is not documented anywhere). Reporting on behalf of @rajeshyelwal who noticed this issue.
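A sketch of a main block that would restore the pattern; the option name mirrors our GUI tests, and test_api_example(url) taking the base URL is an assumption:

```python
# tests/test_api_example.py
if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='API test example')
    parser.add_argument('-U', '--url', dest='url',
                        default='http://localhost:5000',
                        help='URL of the cars-api under test')
    args = parser.parse_args()
    test_api_example(args.url)
```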
In the def post(self, url, data=None, headers={}) method of Base_Mechanize.py (line 11 of the method, i.e. line 63 of the file),
while I was using this file for an Engineering Benchmark API test,
I am getting an error on line 63 (Python says: isinstance expected 2 arguments, got 3).
So I removed mechanize.URLError and it worked for me.
Please fix the issue.
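Rather than dropping mechanize.URLError, the likely root cause is that isinstance takes the candidate types as a single tuple. A sketch of the fix (exception names assumed from the report):

```python
import mechanize

def handle_error(error):
    "Report HTTP/URL errors raised by a mechanize request"
    # before (raises TypeError: isinstance expected 2 arguments, got 3):
    #     if isinstance(error, mechanize.HTTPError, mechanize.URLError):
    # after: pass both exception types as one tuple
    if isinstance(error, (mechanize.HTTPError, mechanize.URLError)):
        print('Request failed: %s' % str(error))
```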
Change the way we figure out the name of the calling module. We seem to be using hard-coded stack indexes (e.g.: [4]) and that won't work as we keep adding pytest plugins. We should try to do something more intelligent, e.g., check for the string 'test_'.
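A sketch of that smarter lookup using the inspect module (the 'test_' convention is the one suggested above):

```python
import inspect

def get_calling_module():
    "Walk up the stack and return the first file that looks like a test"
    for frame_info in inspect.stack():
        # frame_info.filename is the source file of that stack frame
        if 'test_' in frame_info.filename:
            return frame_info.filename
    return 'unknown caller'
```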
I noticed in at least one page object (the form object) that we apply the exception decorator before the screenshot decorator. This results in the screenshot names becoming inner.png instead of the more useful <method_name>.png. Fix it by changing the order of the decorators.
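For reference, a sketch of the reordering; decorators apply bottom-up, so the screenshot decorator must sit closest to the method to see its real name (decorator names as in the framework's Wrapit utility, the method itself is a placeholder):

```python
from utils.Wrapit import Wrapit

# before: _exceptionHandler wrapped the method first, so _screenshot only
# ever saw the handler's inner function and named the file inner.png
#     @Wrapit._screenshot
#     @Wrapit._exceptionHandler
#     def submit_form(self): ...

# after: _screenshot wraps the original method and picks up its name
@Wrapit._exceptionHandler
@Wrapit._screenshot
def submit_form(self):
    "Fill in and submit the form"
    pass
```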
test_successive_form_creation.py should create multiple forms as defined in the form_list in the conf file. Instead, only the last form in the list is getting created.
Developing GUI tests can be a pain. Can we do something to make it easier?
I know that within Qxf2, we have re-used Selenium sessions and also used pdb to step into code. Can we bring those techniques into the framework itself?
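One possibility: a development-only mode that re-attaches to an already-running Selenium session instead of launching a fresh browser every run. A sketch of the known (hacky, Selenium 3-era) technique; executor_url and session_id would be printed and saved by the first run:

```python
from selenium import webdriver
from selenium.webdriver.remote.webdriver import WebDriver

def attach_to_session(executor_url, session_id):
    "Return a driver bound to an existing browser session"
    original_execute = WebDriver.execute
    def patched_execute(self, command, params=None):
        if command == 'newSession':
            # skip creating a new session; pretend the existing one is ours
            return {'success': 0, 'value': None, 'sessionId': session_id}
        return original_execute(self, command, params)
    WebDriver.execute = patched_execute
    driver = webdriver.Remote(command_executor=executor_url, desired_capabilities={})
    driver.session_id = session_id
    WebDriver.execute = original_execute  # restore for other driver instances
    return driver
```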
I wrote a simple test for login and logout of an application. I used the presence of an element after login to verify that the login happened successfully. In other words, I used check_element_present(). When I gave the wrong password, the failure summary was misleading because get_element() always appends the exception message, even when the exception is expected. Can we move the exception append inside the condition that checks the verbose_flag?
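A sketch of the suggested change inside get_element(); method and attribute names here are paraphrased, not copied from Base_Page.py:

```python
def get_element(self, locator, verbose_flag=True):
    "Return the DOM element matching the locator, or None"
    dom_element = None
    try:
        dom_element = self.driver.find_element(*self.split_locator(locator))
    except Exception as e:
        # previously this append happened unconditionally, polluting the
        # failure summary even for expected failures like a wrong password
        if verbose_flag is True:
            self.exceptions.append('Could not locate element %s: %s' % (locator, str(e)))
    return dom_element
```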
I noticed this line: "python tests/test_api_example.py (make sure to run sample cars-api available at qxf2/cars-api repository before api test run)" is bold in the readme. Fix it.
I am getting this warning message when saving screenshots: "UserWarning: name used for saved screenshot does not match file type. It should end with a .png extension".
I used a tool called PAMIE a long while ago. One of its nice features was that it gave a thick yellow highlight around the element being operated upon.
Can we develop a feature like that? The user should be able to turn on the feature when they are developing and then turn it off during actual test runs.
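A sketch of such a feature: outline the element via JavaScript just before acting on it, gated behind a flag so real runs stay fast (highlight_flag would be set from a command line option):

```python
import time

def highlight_element(self, element, wait_seconds=0.3):
    "Draw a thick yellow border around the element being operated upon"
    if not self.highlight_flag:  # off during actual test runs
        return
    original_style = element.get_attribute('style') or ''
    self.driver.execute_script(
        "arguments[0].setAttribute('style', arguments[1]);",
        element, original_style + '; border: 4px solid yellow;')
    time.sleep(wait_seconds)  # long enough for the eye to catch it
    self.driver.execute_script(
        "arguments[0].setAttribute('style', arguments[1]);",
        element, original_style)
```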
We don't have an easy and intuitive way to write a test from scratch. Can we add a simple test_boilerplate.py file to the tests folder? Make sure to add it to the pytest ignore options.
Btw, let us not add any TestRail or Tesults boilerplate code. Just imports, the method signature, expected/actual pass logic, creating the first page object, registering the driver, creating the test summary, an assert, and main should do.
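A sketch of that skeleton; PageFactory.get_page_object, log_result, and write_test_summary are meant to mirror the framework's existing helpers, while the page name and the example check are placeholders:

```python
# tests/test_boilerplate.py
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from page_objects.PageFactory import PageFactory

def test_boilerplate(base_url='https://qxf2.com', browser='chrome'):
    "Copy this skeleton when writing a new test"
    expected_pass = 0
    actual_pass = -1
    try:
        # create the first page object; this also registers the driver
        test_obj = PageFactory.get_page_object("main page", base_url=base_url)

        # example check: replace with your first real verification
        result_flag = test_obj.wait(3)
        test_obj.log_result(result_flag,
                            positive='Loaded the main page',
                            negative='Could not load the main page')

        # create the test summary
        expected_pass = test_obj.result_counter
        actual_pass = test_obj.pass_counter
        test_obj.write_test_summary()
    except Exception as e:
        print('Exception when trying to run test: %s' % __file__)
        print('Python says: %s' % str(e))

    assert expected_pass == actual_pass, 'Test failed: %s' % __file__

if __name__ == '__main__':
    test_boilerplate()
```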
The optparse module used in the Option_Parser utility class is deprecated as of Python 2.7. The command line option parsing implemented in the Option_Parser class using optparse can be achieved with the newer argparse module.
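For reference, a minimal argparse equivalent (only two option names shown, borrowed from the existing parser; the rest would port the same way):

```python
import argparse

def get_parser():
    "Build the command line parser for the test runner"
    parser = argparse.ArgumentParser(description='Qxf2 test runner options')
    parser.add_argument('-B', '--browser', dest='browser',
                        default='firefox', help='Browser to run the test against')
    parser.add_argument('-U', '--url', dest='url',
                        default='https://qxf2.com', help='URL of the application under test')
    return parser

if __name__ == '__main__':
    args = get_parser().parse_args()
    print(args.browser, args.url)
```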