pytest-jira-xray's People

Contributors

fundakol, ronanviel, stefanoborini

pytest-jira-xray's Issues

Is it possible to attach a screenshot of failed tests?

Hi, I have two questions related to attaching logs & screenshots to test execution.

  1. Is it possible to attach a screenshot of the failed tests to the Xray test execution?
  2. Right now, I see logs of the failed tests under the "comment" section and the logs are not complete - only a small portion of the log messages are attached. Is it possible to send a complete test execution log?


Can't send results to Jira Cloud from Intellij

XRAY_API_BASE_URL, XRAY_API_PASSWORD and XRAY_API_USER are specified, but I get the following error:
requests.exceptions.JSONDecodeError: [Errno Expecting value] Basic authentication with passwords is deprecated.

If I try to use a Jira API token and/or client_id/client_secret, I get the same error.

Unclear instructions on how to upload test evidence

Hello @fundakol, thanks a lot for this tool. It seems like it will save me a lot of hassle.

I'm having some issues understanding how to properly attach evidence to a test report which perhaps you can help me with.

My use case is to have the tests run in a bitbucket pipeline which will then upload the results to a new XRAY execution.

I already have that bit working: a new execution is created for each run, and I could even set the version correctly.

However, I am having some issues adding evidence to these test runs.

My test_script so far is very much like the one you have in your example:


@pytest.mark.xray('JIRA-1')
def test_foo():
    assert True

However, following your example in the [documentation](https://github.com/fundakol/pytest-jira-xray#attach-test-evidences) I can't really get it to work. I'm also struggling a bit to understand how this hook is supposed to work: when it is supposed to be called, how I can add evidence from the tests, where the `outcome` is supposed to come from, and how each test can add to the evidence. Right now I'm mostly focusing on simple things like the log output, nothing fancy.
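For illustration, here is a minimal sketch of the hook pattern described in that documentation section, as far as it can be inferred here: the evidences attribute on the report is an assumption about what the plugin reads, and the evidence fields (data, filename, contentType) follow the Xray JSON format.

import base64
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield                      # let pytest build the TestReport first
    report = outcome.get_result()
    if report.when == 'call':            # only attach evidence for the test call phase
        evidences = getattr(report, 'evidences', [])
        # Xray evidence entries are base64-encoded data with a filename and content type.
        evidences.append({
            'data': base64.b64encode(report.caplog.encode('utf-8')).decode('utf-8'),
            'filename': 'captured.log',
            'contentType': 'text/plain',
        })
        report.evidences = evidences     # assumed: the plugin copies this into the payload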

In the pipeline I run the following commands:

pytest --junitxml=test-result.xml dummy_test.py
pytest --jira-xray --client-secret-auth --cloud


Also, I would like to know the recommended way to set environment variables like `XRAY_EXECUTION_SUMMARY`, or whether there is a convenient alternative, for example setting them from within the test script.

Many thanks in advance!!

X-Ray Authentication failure due to SSL

Is it possible to also pass the ENV_XRAY_API_VERIFY_SSL parameter to the ClientSecretAuth class here (https://github.com/fundakol/pytest-jira-xray/blob/master/src/pytest_xray/xray_publisher.py#L40), as advised in psf/requests#6071 (comment)?

OR

Alternatively, how can it be set to False? I tried a few methods, like creating a fixture in conftest and setting session.verify to False on a requests.Session(), but that didn't work. None of these solutions were able to disable SSL verification. Could you please advise?
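For what it's worth, the underlying requests call accepts either a boolean or a CA-bundle path through its verify argument; the environment variable below is exactly the kind of switch this issue asks the plugin to honour, so treat its name as hypothetical.

import os
import requests

# Hypothetical switch: "False" disables verification, anything else is used as a CA bundle path.
verify_setting = os.environ.get('XRAY_API_VERIFY_SSL', 'True')
if verify_setting.lower() in ('false', '0'):
    verify = False            # skip certificate verification entirely
elif verify_setting.lower() in ('true', '1'):
    verify = True             # use the default CA bundle
else:
    verify = verify_setting   # treat the value as a path to a PEM file

response = requests.post(
    'https://jira.example.com/rest/raven/2.0/import/execution',  # placeholder server URL
    json={'tests': []},
    verify=verify,  # requests accepts True/False or a path to a PEM file here
)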

Write data to a local file to send to Jira later?

Getting permission and certificates to send data directly to Jira is a problem where I work. I expect that it will take several months to get approvals. In the meantime, we're still running tests and I'd like to be able to save the results and attach them to a test execution later.

Is this a use case that has been considered, or better, implemented?
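One way to approximate this today, assuming the JSON written by the plugin's --xraypath/--xray-path option (both spellings appear in other issues on this page) matches what the import endpoint expects: run pytest with --jira-xray and the path option now, keep the files, and replay them later with a small script such as the sketch below (URL and credentials are placeholders).

import json
import requests

# Replay a previously saved Xray JSON report once network access/certificates are sorted out.
with open('report.json') as fh:           # file produced earlier by the plugin's path option
    payload = json.load(fh)

response = requests.post(
    'https://jira.example.com/rest/raven/2.0/import/execution',  # placeholder Jira DC/Server URL
    json=payload,
    auth=('jira-user', 'jira-password'),  # placeholder credentials; cloud uses token auth instead
    timeout=30,
)
response.raise_for_status()
print('Created execution:', response.json())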

Reporting duplicated IDs, but there are no duplicated IDs in the tests

Hi there,

I am trying to report the test results to a list of test cases in a parametrized test, but when I try to execute the tests the plugin reports that there are duplicated test IDs.

I have checked the list and I couldn't find any duplicated ID.

Test:

class TestImportedToDeduplication:
    status_filter = {"status": "user.status LIKE 'DUP_CHECKED' or user.status LIKE 'DUPLICATED'"}

    @pytest.mark.xray(
        [
            "VSDS-14750",
            "VSDS-14752",
            "VSDS-14753",
            "VSDS-14771",
            "VSDS-14772",
            "VSDS-14773",
            "VSDS-14774",
            "VSDS-14775",
            "VSDS-14776",
            "VSDS-14777",
            "VSDS-14778",
            "VSDS-14779",
            "VSDS-14780",
            "VSDS-14781",
            "VSDS-14782",
            "VSDS-14783",
            "VSDS-14784",
            "VSDS-14785",
            "VSDS-14786",
            "VSDS-14787",
            "VSDS-14788",
            "VSDS-14789",
            "VSDS-14790",
            "VSDS-14791",
            "VSDS-14792",
            "VSDS-14793",
            "VSDS-14794",
            "VSDS-14795",
            "VSDS-14796",
            "VSDS-14797",
            "VSDS-14798",
            "VSDS-14799",
            "VSDS-14800",
            "VSDS-14801",
            "VSDS-14802",
            "VSDS-14803",
            "VSDS-14804",
            "VSDS-14805",
            "VSDS-14806",
            "VSDS-14807",
            "VSDS-14808",
            "VSDS-14809",
            "VSDS-14810",
            "VSDS-14811",
            "VSDS-14812",
            "VSDS-14813",
            "VSDS-14814",
            "VSDS-14815",
            "VSDS-14816",
            "VSDS-14817",
            "VSDS-14818",
            "VSDS-14819",
            "VSDS-14820",
            "VSDS-14821",
            "VSDS-14822",
            "VSDS-14823",
            "VSDS-14824",
            "VSDS-14825",
            "VSDS-14826",
            "VSDS-14827",
            "VSDS-14828",
            "VSDS-14829",
            "VSDS-14830",
            "VSDS-14831",
            "VSDS-14832",
            "VSDS-14833",
            "VSDS-14834",
            "VSDS-14835",
            "VSDS-14836",
            "VSDS-14837",
            "VSDS-14838",
            "VSDS-14839",
            "VSDS-14840",
            "VSDS-14841",
            "VSDS-14842",
            "VSDS-14843",
            "VSDS-14844",
            "VSDS-14845",
            "VSDS-14846",
            "VSDS-14847",
            "VSDS-14848",
            "VSDS-14849",
            "VSDS-14850",
            "VSDS-14851",
            "VSDS-14852",
            "VSDS-14853",
            "VSDS-14854",
            "VSDS-14855",
            "VSDS-14856",
            "VSDS-14857",
            "VSDS-14858",
            "VSDS-14859",
            "VSDS-14860",
            "VSDS-14861",
            "VSDS-14862",
            "VSDS-14863",
            "VSDS-14864",
            "VSDS-14865",
            "VSDS-14866",
            "VSDS-14867",
            "VSDS-14868",
            "VSDS-14869",
            "VSDS-14870",
            "VSDS-14871",
            "VSDS-14872",
            "VSDS-14873",
            "VSDS-14874",
            "VSDS-14875",
            "VSDS-14876",
            "VSDS-14877",
            "VSDS-14878",
            "VSDS-14879",
            "VSDS-14880",
            "VSDS-14881",
            "VSDS-14882",
            "VSDS-14883",
            "VSDS-14884",
            "VSDS-14885",
            "VSDS-14886",
            "VSDS-14887",
            "VSDS-14888",
            "VSDS-14889",
            "VSDS-14890",
            "VSDS-14891",
            "VSDS-14892",
            "VSDS-14893",
            "VSDS-14894",
            "VSDS-14895",
        ]
    )
    @pytest.mark.parametrize("credit_reference_id", [False, True])
    @pytest.mark.parametrize("session_start", [False, True])
    @pytest.mark.parametrize("rfid", [False, True])
    @pytest.mark.parametrize("emaid", [False, True])
    @pytest.mark.parametrize("evcoid", [False, True])
    @pytest.mark.parametrize("evseid", [False, True])
    @pytest.mark.parametrize("external_cdr_id", [False, True])
    @pytest.mark.parametrize("getCDR_fixt", [["NEW"]], indirect=["getCDR_fixt"])
    @pytest.mark.parametrize(
        "setup_asb_and_sub_management",
        [[status_filter]],
        indirect=["setup_asb_and_sub_management"],
    )
    def test_NEW_to_DUP_CHECKED(
        self,
        setup_asb_and_sub_management,
        getCDR_fixt,
        copy_cdr_with_updated_fields,
        external_cdr_id,
        evseid,
        evcoid,
        rfid,
        emaid,
        session_start,
        credit_reference_id,
    ):
...
test-content
...

Versions

$ python --version
Python 3.10.0
$ pip freeze | grep pytest
pytest==7.2.0
pytest-cov==3.0.0
pytest-html==3.1.1
pytest-jira-xray==0.8.8
pytest-metadata==2.0.1
pytest-xdist==3.3.1

Logs:

$ pytest -c tests_data/deroem/dev/pytest.ini --rootdir=tests_middleware/ tests_middleware/mqs_updates_cdr/test_cdr_mw_dedup.py --jira-xray --api-key-auth --testplan VSDS-14487
============================================================================================================ test session starts ============================================================================================================
platform win32 -- Python 3.10.0, pytest-7.2.0, pluggy-1.0.0 -- C:\Users\alejandro.mendez\AppData\Local\Programs\Python\Python310\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.10.0', 'Platform': 'Windows-10-10.0.22000-SP0', 'Packages': {'pytest': '7.2.0', 'py': '1.11.0', 'pluggy': '1.0.0'}, 'Plugins': {'cov': '3.0.0', 'html': '3.1.1', 'metadata': '2.0.1', 'jira-xray': '0.8.8', 'xdist': '3.3.1', 'tavern': '1.25.2'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk-11.0.10'}
rootdir: C:\Users\alejandro.mendez\workspace\middleware-testing\tests_middleware, configfile: ..\tests_data\deroem\dev\pytest.ini
plugins: cov-3.0.0, html-3.1.1, metadata-2.0.1, jira-xray-0.8.8, xdist-3.3.1, tavern-1.25.2
collected 128 items
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\main.py", line 270, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\main.py", line 323, in _main
INTERNALERROR>     config.hook.pytest_collection(session=session)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_hooks.py", line 265, in __call__
INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_callers.py", line 39, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\main.py", line 334, in pytest_collection
INTERNALERROR>     session.perform_collect()
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\main.py", line 667, in perform_collect
INTERNALERROR>     hook.pytest_collection_modifyitems(
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_hooks.py", line 265, in __call__
INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Roaming\Python\Python310\site-packages\pluggy\_callers.py", line 39, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Local\Programs\Python\Python310\lib\site-packages\pytest_xray\xray_plugin.py", line 169, in pytest_collection_modifyitems
INTERNALERROR>     self._verify_jira_ids_for_items(items)
INTERNALERROR>   File "C:\Users\alejandro.mendez\AppData\Local\Programs\Python\Python310\lib\site-packages\pytest_xray\xray_plugin.py", line 93, in _verify_jira_ids_for_items
INTERNALERROR>     raise XrayError(f'Duplicated test case ids: {duplicated_jira_ids}')
INTERNALERROR> pytest_xray.exceptions.XrayError: Duplicated test case ids: ['VSDS-14750', 'VSDS-14752', 'VSDS-14753', 'VSDS-14771', 'VSDS-14772', 'VSDS-14773', 'VSDS-14774', 'VSDS-14775', 'VSDS-14776', 'VSDS-14777', 'VSDS-14778', 'VSDS-14779', 'VSDS-14780', 'VSDS-14781', 'VSDS-14782', 'VSDS-14783', 'VSDS-14784', 'VSDS-14785', 'VSDS-14786', 'VSDS-14787', 'VSDS-14788', 'VSDS-14789', 'VSDS-14790', 'VSDS-14791', 'VSDS-14792', 'VSDS-14793', 'VSDS-14794', 'VSDS-14795', 'VSDS-14796', 'VSDS-14797', 'VSDS-14798', 'VSDS-14799', 'VSDS-14800', 'VSDS-14801', 'VSDS-14802', 'VSDS-14803', 'VSDS-14804', 'VSDS-14805', 'VSDS-14806', 'VSDS-14807', 'VSDS-14808', 'VSDS-14809', 'VSDS-14810', 'VSDS-14811', 'VSDS-14812', 'VSDS-14813', 'VSDS-14814', 'VSDS-14815', 'VSDS-14816', 'VSDS-14817', 'VSDS-14818', 'VSDS-14819', 'VSDS-14820', 'VSDS-14821', 'VSDS-14822', 'VSDS-14823', 'VSDS-14824', 'VSDS-14825', 'VSDS-14826', 'VSDS-14827', 'VSDS-14828', 'VSDS-14829', 'VSDS-14830', 'VSDS-14831', 'VSDS-14832', 'VSDS-14833', 'VSDS-14834', 'VSDS-14835', 'VSDS-14836', 'VSDS-14837', 'VSDS-14838', 'VSDS-14839', 'VSDS-14840', 'VSDS-14841', 'VSDS-14842', 'VSDS-14843', 'VSDS-14844', 'VSDS-14845', 'VSDS-14846', 'VSDS-14847', 'VSDS-14848', 'VSDS-14849', 'VSDS-14850', 'VSDS-14851', 'VSDS-14852', 'VSDS-14853', 'VSDS-14854', 'VSDS-14855', 'VSDS-14856', 'VSDS-14857', 'VSDS-14858', 'VSDS-14859', 'VSDS-14860', 'VSDS-14861', 'VSDS-14862', 'VSDS-14863', 'VSDS-14864', 'VSDS-14865', 'VSDS-14866', 'VSDS-14867', 'VSDS-14868', 'VSDS-14869', 'VSDS-14870', 'VSDS-14871', 'VSDS-14872', 'VSDS-14873', 'VSDS-14874', 'VSDS-14875', 'VSDS-14876', 'VSDS-14877', 'VSDS-14878', 'VSDS-14879', 'VSDS-14880', 'VSDS-14881', 'VSDS-14882', 'VSDS-14883', 'VSDS-14884', 'VSDS-14885', 'VSDS-14886', 'VSDS-14887', 'VSDS-14888', 'VSDS-14889', 'VSDS-14890', 'VSDS-14891', 'VSDS-14892', 'VSDS-14893', 'VSDS-14894', 'VSDS-14895']

Any idea about the problem?

Thanks!
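Not a confirmed answer, but the numbers line up with marker semantics: the seven boolean parametrize decorators produce 2^7 = 128 collected items, and pytest copies the xray marker (with the full ID list) onto every one of them, so each ID is seen 128 times during collection. A minimal sketch of the same situation:

import pytest

# The marker is attached to both parametrized variants, so the plugin sees
# VSDS-14750 and VSDS-14752 twice during collection and reports them as duplicated.
@pytest.mark.xray(["VSDS-14750", "VSDS-14752"])
@pytest.mark.parametrize("flag", [False, True])
def test_minimal(flag):
    assert True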

Close execution issue after upload

Hello again @fundakol ,

I was wondering if it's possible to close an execution issue after the results have been uploaded, either by default or with a flag.

I'm not sure if this is something this plugin supports or aims to do, or if one should do it manually/directly in Jira. I guess in that case one would also need what is described in #8.

Apologies for the issues but hopefully these help other people in the future as well.

Thank you once again 😄!

Edit: in case it's not clear, I'm talking about the Jira execution issue status, not PASS/FAILED etc. In my case a new execution issue is created but left in the state New; I would like it moved to Done immediately, as these are generated automatically and nobody would close them otherwise.

I can maybe work around this with an Automation rule within Jira itself, but ideally this wouldn't be needed, and it seems like it would be a nice addition to this project.

Edit 2:
Having checked https://docs.getxray.app/display/XRAYCLOUD/Using+Xray+JSON+format+to+import+execution+results, it seems that this is not supported on Xray's side. I'll still wait to hear from you either way 😅

Can't get it to work in a Python application

I'm pretty new to Python development and I can't get the login working within an application built with PySimpleGUI.
Is there any template from which I can figure this out?

Thanks in advance,
Eric

Possibility to mark a test to satisfy two or more test identifiers?

The Xray format wants a string for the "testKey" entry in the JSON file. Example:

 "testKey": "QSOL-1122"

However, it is not uncommon for some of my pytest tests to check multiple Xray tests. This is because I use pytest to run Selenium, which takes considerable time, so it pays off to check multiple Xray tests in a single pytest test.

I tried using

 @pytest.mark.xray(["QSOL-1234", "QSOL-1235"])

and pytest-jira-xray does create a JSON file with a list, instead of a string, for the "testKey". Unfortunately, Xray refuses to parse this format and requires a string for "testKey".

I propose a modification to the behavior of the library: when it is passed a list, it should create one entry per "testKey" (probably with the same evidence) to appease Xray, while still allowing this use case.
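A sketch of what that expansion could look like (the helper below is hypothetical; the field names follow the Xray JSON format already produced by the plugin):

def expand_test_keys(test_keys, result):
    """Turn one pytest result into one Xray 'tests' entry per test key."""
    if isinstance(test_keys, str):
        test_keys = [test_keys]
    return [{**result, 'testKey': key} for key in test_keys]

# ["QSOL-1234", "QSOL-1235"] becomes two entries sharing the same status and evidence.
entries = expand_test_keys(["QSOL-1234", "QSOL-1235"], {'status': 'PASS', 'comment': ''})
assert [e['testKey'] for e in entries] == ['QSOL-1234', 'QSOL-1235']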

I can implement this in the library, but before doing so I want the issue to be known to the author. Once it's been discussed, I can implement the feature. Please let me know.

Create new release

Is it possible to create a new release based on the recent changes on master?

Failing to publish results in JIRA Xray

I am getting the error below:

ERROR pytest_xray.xray_publisher:xray_publisher.py:91 HTTPError: Could not post to JIRA service at https:///rest/raven/2.0/import/execution. Response status code: 400
requests.exceptions.HTTPError: 400 Client Error: for url: https:///rest/raven/2.0/import/execution
ERROR pytest_xray.xray_publisher:xray_publisher.py:93 Error message from server: issuetype: issuetype

I have defined the marker as below:
@pytest.mark.xray('SECSOLTEST-291')

All environment variables are defined except the client ID and client secret key. Are these mandatory?

Is it possible to custom the name of the pytest-jira-xray flag?

Not sure if this question belongs here, as this is more of a customer support type problem...
but,
Is it possible to change the flag --jira-xray to something like --xray-report? By custom I mean locally, only in our code repo, not for anyone else.
Thanks,
Adam

Bad Request on execution import

I just get the following error:

Could not publish results to Jira XRAY!
HTTPError: Could not post to JIRA service at https://example.com/jira/rest/raven/2.0/import/execution. Response status code: 400

when running the execution. The base URL is fine, because I use it with the official Jira Python library. The code 400 means Bad Request; how can I debug that?
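One plugin-independent way to see what is actually sent and returned is to turn on the standard library and urllib3 debug logging before running pytest, for example from conftest.py; the 400 response body usually carries Xray's explanation.

# conftest.py -- verbose HTTP logging, nothing specific to pytest-jira-xray
import http.client
import logging

http.client.HTTPConnection.debuglevel = 1             # dump request/response headers
logging.basicConfig(level=logging.DEBUG)               # route urllib3/requests messages to stderr
logging.getLogger('urllib3').setLevel(logging.DEBUG)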

Error while creating new test execution

Hi,

I'm using version 0.8.9 of pytest-jira-xray with the following options:

--jira-xray --cloud --allow-duplicate-ids --client-secret-auth

and I get the following error:

requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://xray.cloud.getxray.app/api/v2/import/execution
2023-09-08 12:45:01 ERROR xray_publisher.py _send_data: Error message from server: Error creating Test Execution - Issue create failed! - summary: You must specify a summary of the issue.

Not sure if it helps, but from a quick debug I can see that pytest-jira-xray tries to send the request with the following payload:

{'info': {'finishDate': '2023-09-08T10:59:09+0000', 'startDate': '2023-09-08T10:58:53+0000'}, 'tests': [{'status': 'PASSED', 'testKey': 'ITS-12120'}]}

Any ideas how to fix this?
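The server message points at the missing summary field: the Xray JSON format (see the manual-test example further down this page) expects it under info, which the plugin presumably fills from XRAY_EXECUTION_SUMMARY or an existing --execution key. Something along these lines is what the importer accepts:

# Same payload as above, plus the summary the importer is asking for.
payload = {
    'info': {
        'summary': 'Nightly regression run',          # e.g. taken from XRAY_EXECUTION_SUMMARY
        'startDate': '2023-09-08T10:58:53+0000',
        'finishDate': '2023-09-08T10:59:09+0000',
    },
    'tests': [{'status': 'PASSED', 'testKey': 'ITS-12120'}],
}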

multiple endpoints

I would like to have multiple TEST_EXECUTION_ENDPOINTs so that I can both save a JSON file and send results to the cloud.

The plugin has the options --xraypath and --cloud, but I cannot use them both in one pytest call.

To reproduce:

pytest --xray-path=my_report.json --cloud test_something.py

Actual behaviour:

The result is not sent to Jira/Xray cloud.

Expected behaviour:

The result is both stored in my_report.json and sent to the cloud.

Invalid header

Uploading my test results with pytest --jira-xray --client-secret-auth (as per instructions), I get

test_example.py ..                                                                                                                                                                                                      [100%]
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/repo/bin/pytest", line 8, in <module>
    sys.exit(console_main())
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py", line 189, in console_main
    code = main()
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py", line 166, in main
    ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_hooks.py", line 433, in __call__
    return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_manager.py", line 112, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_callers.py", line 116, in _multicall
    raise exception.with_traceback(exception.__traceback__)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_callers.py", line 80, in _multicall
    res = hook_impl.function(*args)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/main.py", line 317, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/main.py", line 305, in wrap_session
    config.hook.pytest_sessionfinish(
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_hooks.py", line 433, in __call__
    return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_manager.py", line 112, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_callers.py", line 133, in _multicall
    teardown[0].send(outcome)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/terminal.py", line 857, in pytest_sessionfinish
    outcome.get_result()
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_result.py", line 108, in get_result
    raise exc.with_traceback(exc.__traceback__)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_callers.py", line 80, in _multicall
    res = hook_impl.function(*args)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pytest_xray/xray_plugin.py", line 180, in pytest_sessionfinish
    self.issue_id = self.publisher.publish(results)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pytest_xray/xray_publisher.py", line 129, in publish
    response_data = self._send_data(self.endpoint_url, self.auth, data)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pytest_xray/xray_publisher.py", line 100, in _send_data
    response = requests.request(
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/urllib3/connectionpool.py", line 398, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/urllib3/connection.py", line 244, in request
    super(HTTPConnection, self).request(method, url, body=body, headers=headers)
  File "/home/user/.pyenv/versions/3.8.10/lib/python3.8/http/client.py", line 1252, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/user/.pyenv/versions/3.8.10/lib/python3.8/http/client.py", line 1293, in _send_request
    self.putheader(hdr, value)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/urllib3/connection.py", line 224, in putheader
    _HTTPConnection.putheader(self, header, *values)
  File "/home/user/.pyenv/versions/3.8.10/lib/python3.8/http/client.py", line 1230, in putheader
    raise ValueError('Invalid header value %r' % (values[i],))
ValueError: Invalid header value b"Bearer <!DOCTYPE html><html lang=en><head><meta charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=edge><title>Oops, you&#39;ve found a dead link. - JIRA</title><link type='text/css' rel='stylesheet' href='/static-assets/metal-all.css' media='all'><script src='/static-assets/metal-all.js'></script><meta name=decorator content=none /></head><body class= error-page error404><script type=text/javascript>document.body.className +=  js-enabled;</script><div id=page><header id=header role=banner></header><!-- #header --><section id=content role=main><div class=aui-page-panel><div class=aui-page-panel-inner><section class=aui-page-panel-content lowerContent><div id=error-state><span class=error-type></span><h1>Oops, you&#39;ve found a dead link.</h1><ul><li>Go back to the <a href=javascript:window.history.back()>previous page</a></li><li>Go to the <a href=/secure/MyJiraHome.jspa>Home Page</a></li></ul></div></section><!-- .aui-page-panel-content --></div><!-- .aui-page-panel-inner --></div><!-- .aui-page-panel --></section><!-- #content --><footer id=footer role=contentinfo><section class=footer-body>\n\n\n<ul class=atlassian-footer>\n    <li>\n        Atlassian JIRA <a class=seo-link href=https://www.atlassian.com/software/jira>Project Management Software</a>\n\n                                    \n                        \n        <span id=footer-build-information>(v1001.0.0-SNAPSHOT#100233-<span title='b4f309b7403c6ec1360911f686faf41e7d904ce2' data-commit-id='b4f309b7403c6ec1360911f686faf41e7d904ce2}'>sha1:b4f309b</span>)</span>\n    </li>\n    <li>\n        <a id=about-link href=/secure/AboutPage.jspa>About JIRA</a>\n    </li>\n    <li>\n                        <a id=footer-report-problem-link href=/secure/ContactAdministrators!default.jspa>Report a problem</a>\n    </li>\n</ul>\n<div id=footer-logo><a href=http://www.atlassian.com/ rel=nofollow>Atlassian</a></div></section></footer><!-- #footer --></div><!-- #page --></body></html>"

Is it possible to execute without using the marker @pytest.mark.xray and allow it to create a new test in xray within the project under a test plan?

Let's say there is no test case created yet in Xray Jira.

def test_page_load():
    assert True

When we execute this, is it possible to create a new test in Xray Jira with the name "test_page_load" under the provided test plan?

Context: a lot of test cases had already been automated before we started using Xray, so there is no binding to those test cases in Xray since they do not exist in Jira. Rather than creating all of them manually in Jira and providing their IDs in the mark.xray tag, could the library create those test cases in Jira? Then we could use those IDs in the mark tag.

Start time is after finish time

When executing tests and exporting them to Xray, an error occurs because the start date is set after the finish date. This also happens when generating a .json file instead of exporting directly to Xray; looking into the .json file confirms that the "startDate" lies after the "finishDate", by one second.

{
  "info": {
    "startDate": "2023-02-08T12:40:25+0000",
    "finishDate": "2023-02-08T12:40:24+0000"
  }
  (...)
}

Unable to override XRAY_EXECUTION_SUMMARY per specific test

We set the XRAY_EXECUTION_SUMMARY value in pytest.ini under env, for example:

env =
XRAY_EXECUTION_SUMMARY=Smoke Test

This applies to all test executions.

But is it possible to override this value per certain test?

I tried a couple of things to override this value per test case:

  1. Patching the environment with a decorator:

@mock.patch.dict('os.environ', {'XRAY_EXECUTION_SUMMARY': 'Integration Test'})

  2. An autouse fixture:

@pytest.fixture(autouse=True)
def mock_settings_env_vars():
    with mock.patch.dict(os.environ, {'XRAY_EXECUTION_SUMMARY': 'Integration Test'}):
        yield

But the Xray plugin has already taken the value from the ini file and always picks "Smoke Test" as the test execution summary title.

Could not publish results to Jira XRAY!

We have integrated this plugin into the Git pipeline, and when all the tests have run we get the error below:
---------------------------------- Jira XRAY -----------------------------------
Could not publish results to Jira XRAY!
HTTPError: Could not post to JIRA service at https://xray.cloud.getxray.app/api/v2/import/execution. Response status code: 400

Below is the setup used in the Git YAML file:

Set up Jira authentication for automatic Xray test execution update

  • export XRAY_API_BASE_URL=https://xray.cloud.getxray.app
  • export XRAY_CLIENT_ID=${XRAY_CLIENT_ID}
  • export XRAY_CLIENT_SECRET=${XRAY_CLIENT_SECRET}
  • pytest --alluredir report/allure/allure-results --jira-xray --execution TEST-xxxx --cloud

Also, inside the code we are using markers as below:
@pytest.mark.xray(['TEST-xxx', 'TEST-xx'])
def test1():
    ...

@pytest.mark.xray('TEST-')
def test2():
    ...

Error publishing results to JIRA

I have gone through the previous issues, but I was not able to find the answer in any of them, which is why I am creating a new ticket.

What I am using:

XRAY_API_BASE_URL=https://xxxxxxxxxxx.atlassian.net/
XRAY_API_USER="JIRA USER NAME"
XRAY_API_PASSWORD="JIRA Password"

This gives an error

DEBUG urllib3.connectionpool:connectionpool.py:1048 Starting new HTTPS connection (1): xxxxxxx.atlassian.net:443
DEBUG urllib3.connectionpool:connectionpool.py:546 https://avive.atlassian.net:443 "POST /api/v2/import/execution HTTP/1.1" 404 None
ERROR pytest_xray.xray_publisher:xray_publisher.py:125 HTTPError: Could not post to JIRA service at https://xxxxxxxx.atlassian.net/api/v2/import/execution. Response status code: 404

The v2/import/execution path seems to be the Xray API:
https://docs.getxray.app/display/XRAYCLOUD/Import+Execution+Results+-+REST+v2


Trying

XRAY_API_BASE_URL=https://xray.cloud.getxray.app/
XRAY_API_USER="JIRA USER NAME"
XRAY_API_PASSWORD="JIRA Password"

ERROR pytest_xray.xray_publisher:xray_publisher.py:125 HTTPError: Could not post to JIRA service at https://xray.cloud.getxray.app/api/v2/import/execution. Response status code: 401


Trying

XRAY_API_BASE_URL=https://xray.cloud.getxray.app/
XRAY_API_USER="JIRA XRAY Client ID"
XRAY_API_PASSWORD="JIRA XRAY Client Password"

ERROR pytest_xray.xray_publisher:xray_publisher.py:125 HTTPError: Could not post to JIRA service at https://xray.cloud.getxray.app/api/v2/import/execution. Response status code: 401

Can someone help with the configuration?
My setup: I have Jira and purchased the Xray plugin to manage test cases.

Can't create new execution in XRAY cloud, using Jira client secret authentication

I'm trying to post a new execution to Xray, but I get:
[xray_publisher.py][ERROR] Error message from server: Error creating Test Execution - Issue create failed! - Field Assignee is required.

I'm using Jira Cloud, running from IntelliJ with the following arguments:
--jira-xray --cloud --client-secret-auth -o log_cli=true

Sending results for manual test results with steps

Hello! Are there any plans to implement sending results for manual tests with steps?
This is a simple example of a JSON file with execution results for a manual test.

{
  "info": {
    "startDate": "2023-01-12T14:09:31+0000",
    "finishDate": "2023-01-12T14:09:35+0000",
    "version": "1.5.0",
    "testEnvironments": [
      "installer"
    ],
    "summary": "\"Smoke Autotest\"",
    "description": "\"This is test 12/01/23\"",
    "testPlanKey": "TEST-9213"
  },
  "tests": [
    {
      "testKey": "TEST-9894",
      "status": "PASS",
      "comment": ""
    },
    {
      "testKey": "TEST-9214",
      "status": "FAIL",
      "comment": "text",
      "steps": [
        {
          "status": "FAIL",
          "actualResult": "text",
          "comment": "text"
        },
        {
          "status": "PASS",
          "actualResult": "text"
        }
      ]
    }
  ]
}

If you do not plan to, I would be very grateful for a hint on how to do this using "def pytest_xray_results". My test looks like this:


import pytest
import allure
from allure_commons.types import AttachmentType
@allure.feature('Test_1')
@allure.story('Test_1-1')
@pytest.mark.xray('TEST-9894')
def test_jira_1(browser):
    with allure.step('Step 1'):
        try:
            assert True
        except:
            allure.attach(browser.get_screenshot_as_png(), name='error_screen', attachment_type=AttachmentType.PNG)
            raise
    with allure.step('Step 2'):
        try:
            assert True
        except:
            allure.attach(browser.get_screenshot_as_png(), name='error_screen', attachment_type=AttachmentType.PNG)
            raise
@allure.story('Test_2-2')
@pytest.mark.xray('TEST-9214')
def test_jira_2(browser):
    with allure.step('Step 1'):
        try:
            assert True
        except:
            allure.attach(browser.get_screenshot_as_png(), name='error_screen', attachment_type=AttachmentType.PNG)
            raise
    with allure.step('Step 2'):
        try:
            assert False
        except:
            allure.attach(browser.get_screenshot_as_png(), name='error_screen', attachment_type=AttachmentType.PNG)
            raise

Thanks in advance for your reply
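Pending an official answer, a hedged sketch of how the pytest_xray_results hook mentioned above could inject step results before publishing; the hook arguments are assumed from the hook name used in this issue, and the step fields copy the JSON example above.

# conftest.py -- assumed hook: the plugin passes the final results dict before it is sent.
def pytest_xray_results(results, session):
    for test in results.get('tests', []):
        if test.get('testKey') == 'TEST-9214':
            # Step outcomes would have to be collected by the tests themselves, e.g. stored on the session.
            test['steps'] = [
                {'status': 'FAIL', 'actualResult': 'text', 'comment': 'text'},
                {'status': 'PASS', 'actualResult': 'text'},
            ]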

Alternative way to set test execution summary?

It seems strange that the required test execution summary is tied to an environment variable so that it is the same for every run unless I change the environment variable. Is there any way to pass it in as a flag or in a fixture or something?

Allow optionally to have duplicated identifiers

In some cases, the same Xray test may be stressed, in different ways or with different preconditions, from different pytest test functions.

Although it really depends on how the test, the Xray wording and the user story are formulated, it may pay off to allow duplication of Xray identifiers, so that multiple pytest tests can be marked with the same Jira Xray identifier.

At the moment, there are two issues to solve:

  • It should probably not be the default; instead, a switch flag should be added to allow duplicated identifiers rather than raising an exception.
  • Error reports from each test must be packaged appropriately. Jira and Xray expect unique testKey entries, so the comment must bundle the result and the logs from all the pytest tests sharing a key. For example, if test_A() and test_B() are both marked with JIRA-1234, that ID should probably be reported as FAIL when test_A passes and test_B fails (see the sketch after this list).
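A minimal sketch of the merging rule from the second bullet (a hypothetical helper, not part of the plugin): any failure wins, and only an all-pass set is reported as PASS.

def merge_statuses(statuses):
    """Collapse the results of several pytest tests sharing one Jira test key."""
    if any(status == 'FAIL' for status in statuses):
        return 'FAIL'
    if all(status == 'PASS' for status in statuses):
        return 'PASS'
    return 'TODO'  # mixed or skipped outcomes are left for a human to triage

# test_A passes, test_B fails -> JIRA-1234 is reported as FAIL.
assert merge_statuses(['PASS', 'FAIL']) == 'FAIL'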

ExitCode cannot be imported

When trying to run pytest --jira-xray --client-secret-auth after having set all the env vars, I get:

Traceback (most recent call last):
  File "/home/user/.pyenv/versions/repo/bin/pytest", line 8, in <module>
    sys.exit(main())
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py", line 60, in main
    config = _prepareconfig(args, plugins)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py", line 194, in _prepareconfig
    return pluginmanager.hook.pytest_cmdline_parse(
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_hooks.py", line 433, in __call__
    return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_manager.py", line 112, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_callers.py", line 133, in _multicall
    teardown[0].send(outcome)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/helpconfig.py", line 93, in pytest_cmdline_parse
    config = outcome.get_result()
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_result.py", line 108, in get_result
    raise exc.with_traceback(exc.__traceback__)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_callers.py", line 80, in _multicall
    res = hook_impl.function(*args)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py", line 654, in pytest_cmdline_parse
    self.parse(args)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py", line 871, in parse
    self._preparse(args, addopts=addopts)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py", line 816, in _preparse
    self.pluginmanager.load_setuptools_entrypoints("pytest11")
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pluggy/_manager.py", line 364, in load_setuptools_entrypoints
    plugin = ep.load()
  File "/home/user/.pyenv/versions/3.8.10/lib/python3.8/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/home/user/.pyenv/versions/3.8.10/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 296, in load_module
    six.exec_(co, mod.__dict__)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pytest_xray/plugin.py", line 24, in <module>
    from pytest_xray.xray_plugin import XrayPlugin
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 296, in load_module
    six.exec_(co, mod.__dict__)
  File "/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/pytest_xray/xray_plugin.py", line 8, in <module>
    from _pytest.config import Config, ExitCode
ImportError: cannot import name 'ExitCode' from '_pytest.config' (/home/user/.pyenv/versions/repo/lib/python3.8/site-packages/_pytest/config/__init__.py)

Using plugin raises PytestUnknownMarkWarning

I get the following warning:

PytestUnknownMarkWarning: Unknown pytest.mark.xray - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html

I always assumed that if the mark is declared by the plugin, you should not have to register it manually. Of course the workaround is trivial, but either I misunderstood how pytest should behave with plugin-provided markers, or something is preventing pytest from discovering the marker from the plugin.
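For anyone hitting the same warning, the trivial workaround mentioned above is to register the marker yourself with the standard pytest API (the description text below is made up):

# conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        'markers',
        'xray(keys): associate the test with one or more Jira Xray test keys',
    )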

Add support for Xray on Jira Cloud

As many teams use Xray Cloud (i.e. Xray for Jira Cloud), supporting it would increase the range of users benefiting from this library.
I can try to make a PR for it, whenever possible.

Submit a test that does not exist. What's the outcome?

I am trying to understand from the general code and the Xray endpoint documentation what happens if I submit a result for an identifier (e.g. PROJ-1234) that is not assigned to a test (e.g. PROJ-1234 is a Story) or does not exist at all.

Unfortunately I can't mess with my current Jira to run an experiment, but a colleague informed me that the Xray endpoint might create an empty Xray test automatically. Is this actually true, and if so, is there some mechanism in the plugin to prevent this from happening (that is, only accept the submission if all tests exist)?

Support of parameterized test

It would be nice if parametrized tests were supported as well, so that tests executed in multiple iterations with different parameters are reported as documented here.
You'll find an example below under the key iterations:

	"id": 10603,
	"status": "EXECUTING",
	"color": "#F1E069",
	"testKey": "ID-59",
	"testExecKey": "ID-60",
	"assignee": "User1",
	"executedBy": "User1",
	"startedOn": "2023-11-16T08:00:21+01:00",
	"startedOnIso": "2023-11-16T08:00:21+01:00",
	"defects": [],
	"evidences": [],
	"testEnvironments": [],
	"fixVersions": [],
	"customFields": [],
	"iterations": [
		{
			"id": 3,
			"status": "PASS",
			"color": "#95C160",
			"parameters": [
				{
					"name": "Param1",
					"value": "1"
				},
				{
					"name": "Param2",
					"value": "A"
				}
			]
		},
		{
			"id": 4,
			"status": "TODO",
			"color": "#A2A6AE",
			"parameters": [
				{
					"name": "Param1",
					"value": "2"
				},
				{
					"name": "Param2",
					"value": "B"
				}
			]
		}
	]
}
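For context, the pytest side that such an iterations entry would correspond to might look like the sketch below (illustrative only; as this issue says, the plugin does not emit iterations today):

import pytest

# Two iterations of ID-59, matching the Param1/Param2 values in the JSON above.
@pytest.mark.xray('ID-59')
@pytest.mark.parametrize('param1, param2', [('1', 'A'), ('2', 'B')])
def test_id_59(param1, param2):
    assert True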

Pushing multiple test executions when executing tests in parallel using pytest-xdist

Hi there,

I have identified that whenever I execute test cases using the plugin pytest-xdist, more than one x-ray test execution will be sent to JIRA.

One test execution for each worker and one at the end with all the results together.

In the logs I can only see a message about one test execution being created, but it actually creates more:

 Uploaded results to JIRA XRAY. Test Execution Id: VSDS-19800

The ticket number shown is actually the last execution created.

Here are the versions I use:

$ python --version
Python 3.10.0
$ pip freeze | grep pytest
pytest==7.2.0
pytest-cov==3.0.0
pytest-html==3.1.1
pytest-jira-xray==0.8.6
pytest-metadata==2.0.1
pytest-xdist==3.3.1

Does anyone know if there is a specific configuration needed to avoid this behaviour? I couldn't find anything related to that in the documentation.

Many thanks

Getting Duplicate Id for scenario outline

I am trying to execute and update results for a scenario outline in Jira Xray, but it gives a "duplicate ids" error. Does pytest-jira-xray provide direct handling for scenario outlines?

Index Error while using @pytest.mark.xray decorator

INTERNALERROR>           ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR>   File "/Users/ronaldo/openC/ui-testspom/Tests_Bamboo_v2/venvBamboo/lib/python3.12/site-packages/pytest_xray/xray_plugin.py", line 170, in pytest_collection_modifyitems
INTERNALERROR>     self._verify_jira_ids_for_items(items)
INTERNALERROR>   File "/Users/ronaldo/openC/ui-testspom/Tests_Bamboo_v2/venvBamboo/lib/python3.12/site-packages/pytest_xray/xray_plugin.py", line 84, in _verify_jira_ids_for_items
INTERNALERROR>     test_keys = self._get_test_keys(item)
INTERNALERROR>                 ^^^^^^^^^^^^^^^^^^^^^^^^^
INTERNALERROR>   File "/Users/ronaldo/openC/ui-testspom/Tests_Bamboo_v2/venvBamboo/lib/python3.12/site-packages/pytest_xray/xray_plugin.py", line 70, in _get_test_keys
INTERNALERROR>     if isinstance(marker.args[0], str):
INTERNALERROR>                   ~~~~~~~~~~~^^^
INTERNALERROR> IndexError: tuple index out of range
@pytest.mark.xray([
    'AR-7645',
    'AR-7406'
])
def test_asd():
    ...

The mark.xray marker raises this exception; it doesn't work with a list nor with a single string.

@pytest.mark.xray(
    'AR-7645')

There is an issue in this method of the xray_plugin module:

    def _get_test_keys(self, item: Item) -> List[str]:
        """Return JIRA ids associated with test item"""
        test_keys: List[str] = []
        marker = self._get_xray_marker(item)

        if not marker:
            return test_keys

        if isinstance(marker.args[0], str):
            test_keys = [marker.args[0]]
        elif isinstance(marker.args[0], list):
            test_keys = list(marker.args[0])
        else:
            raise XrayError('xray marker can only accept strings or lists')
        return test_keys
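A defensive tweak of the quoted method that would avoid the IndexError when the marker is present but has no positional arguments (a sketch against the code as shown here, reusing its Item, List and XrayError names, not a committed fix):

    def _get_test_keys(self, item: Item) -> List[str]:
        """Return JIRA ids associated with a test item."""
        marker = self._get_xray_marker(item)
        if not marker or not marker.args:
            # Marker missing, or used without arguments such as @pytest.mark.xray(): nothing to report.
            return []
        if isinstance(marker.args[0], str):
            return [marker.args[0]]
        if isinstance(marker.args[0], list):
            return list(marker.args[0])
        raise XrayError('xray marker can only accept strings or lists')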

Add the possibility to set custom JIRA web server truststore

I am getting this error when executing my pytest tests:
OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]

Could you please add the possibility to set the truststore of the Jira web server?

It could be a new environment variable, for example:
export XRAY_API_BASE_TRUSTSTORE=</path/to/PEM file> or False

Then the truststore parameter would be added to the requests.request call:

    def publish_xray_results(self, url: str, auth: AuthBase, data: dict) -> dict:
        headers = {'Accept': 'application/json',
                   'Content-Type': 'application/json'}
        data = json.dumps(data)
        try:
            # truststore would hold the value read from XRAY_API_BASE_TRUSTSTORE (a PEM path or False)
            response = requests.request(method='POST', url=url, headers=headers, data=data, auth=auth, verify=truststore)

Import result error x-ray cloud when using test title

The test cases I write need a title to be displayed in the HTML report file, so I add the description inside the function between ''' ... ''' as shown below:

@pytest.mark.xray('ABC-704')
def test_foo():
    '''Check Login Functionality'''
    .......
    assert True

I have a hook that uses that summary as the title of the test in the report:

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    test_fn = item.obj
    docstring = getattr(test_fn, '__doc__')
    if docstring:
        report.nodeid = docstring

This uses that information as the title of the test case in the report, which is more readable than displaying the function name.

But if I use this, the API is not able to import the results. I get the error:

Could not publish results to Jira XRAY!
HTTPError: Could not post to JIRA service at https://xray.cloud.getxray.app/api/v2/import/execution. Response status code: 400
Error message from server: Result is not valid Xray Format

The error is gone and the execution result imports fine when I remove the ''' ... '''.

Is there a way to allow this and still be able to import?

Reporting the test results of one pytest using different configs to different Tests in XRAY

Let's say I have a pytest test like:

def test_foo():
    assert foo(3) == 4

and a parametrized test like:

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),      # Test case 1: 1 + 2 = 3
    (0, 0, 0),      # Test case 2: 0 + 0 = 0
])
def test_add(a, b, expected):
    assert add(a, b) == expected

For both types of tests it seems sufficient for now to report only the overall result to Xray, marking the above tests with the Xray test ID and then using pytest --jira-xray --execution TestExecutionId to upload results to an existing test execution.

If there is now a need to run test_foo using different configs, like
pytest test_foo --config configA --jira-xray --execution TestExecutionId
and then
pytest test_foo --config configB --jira-xray --execution TestExecutionId
would it be possible to report the test results to two different Xray test IDs, potentially via a --test command-line option or in some other way?
