
schemathesis's Introduction

Schemathesis: catch crashes, validate specs, and save time



Documentation: https://schemathesis.readthedocs.io/en/stable/

Chat: https://discord.gg/R9ASRAmHnA


Why Schemathesis?

Schemathesis is a tool that automates your API testing to catch crashes and spec violations. Built on top of the widely-used Hypothesis framework for property-based testing, it offers the following advantages:

🕒 Time-Saving

Automatically generates test cases, freeing you from manual test writing.

πŸ” Comprehensive

Utilizes fuzzing techniques to probe both common and edge-case scenarios, including those you might overlook.

🛠️ Flexible

Supports OpenAPI, GraphQL, and can work even with partially complete schemas. Only the parts describing data generation or responses are required.

🎛️ Customizable

Customize the framework by writing Python extensions to modify almost any aspect of the testing process.

🔄 Reproducible

Generates code samples to help you quickly replicate and investigate any failing test cases.

Quick Demo

Schemathesis Demo

With a summary right in your PRs:

[Screenshot: test report summary in a pull request]

Getting Started

Choose from multiple ways to start testing your API with Schemathesis.

💡 Your API schema can be either a URL or a local path to a JSON/YAML file.

💻 Command-Line Interface

Quick and easy for those who prefer the command line.

Python

  1. Install via pip: python -m pip install schemathesis
  2. Run tests
st run --checks all https://example.schemathesis.io/openapi.json

Docker

  1. Pull Docker image: docker pull schemathesis/schemathesis:stable
  2. Run tests
docker run schemathesis/schemathesis:stable
   run --checks all https://example.schemathesis.io/openapi.json

🐍 Python Library

For more control and customization, integrate Schemathesis into your Python codebase.

  1. Install via pip: python -m pip install schemathesis
  2. Add to your tests:
import schemathesis

schema = schemathesis.from_uri("https://example.schemathesis.io/openapi.json")


@schema.parametrize()
def test_api(case):
    case.call_and_validate()

💡 See a complete working example project in the /example directory.

:octocat: GitHub Integration

GitHub Actions

Run Schemathesis tests as a part of your CI/CD pipeline.

Add this YAML configuration to your GitHub Actions:

api-tests:
  runs-on: ubuntu-22.04
  steps:
    - uses: schemathesis/action@v1
      with:
        schema: "https://example.schemathesis.io/openapi.json"
        # OPTIONAL. Add Schemathesis.io token for pull request reports
        token: ${{ secrets.SCHEMATHESIS_TOKEN }}

For more details, check out our GitHub Action repository.

💡 See our GitHub Tutorial for step-by-step guidance.

GitHub App

Receive automatic comments in your pull requests and updates on GitHub checks status. Requires usage of our SaaS platform.

  1. Install the GitHub app.
  2. Enable in your repository settings.

Software as a Service

Schemathesis CLI integrates with Schemathesis.io to enhance bug detection by optimizing test case generation for efficiency and realism. It leverages various techniques to infer appropriate data generation strategies, provide support for uncommon media types, and adjust schemas for faster data generation. The integration also detects the web server being used to generate more targeted test data.

Schemathesis.io offers a user-friendly UI that simplifies viewing and analyzing test results. If you prefer an all-in-one solution with quick setup, we have a free tier available.

How it works

Here’s a simplified overview of how Schemathesis operates:

  1. Test Generation: Using the API schema to create a test generator that you can fine-tune to your testing requirements.
  2. Execution and Adaptation: Sending tests to the API and adapting through statistical models and heuristics to optimize subsequent cases based on responses.
  3. Analysis and Minimization: Checking responses to identify issues and minimizing failing test cases (simplifying them) for easier debugging.
  4. Stateful Testing: Running multistep tests to assess API operations in both isolated and integrated scenarios.
  5. Reporting: Generating detailed reports with insights and cURL commands for easy issue reproduction.

Research Findings on Open-Source API Testing Tools

Our study, presented at the 44th International Conference on Software Engineering, highlighted Schemathesis's performance:

  • Defect Detection: identified a total of 755 bugs in 16 services, finding 1.4× to 4.5× more defects than the second-best tool in each case.

  • High Reliability: ran consistently across the evaluated services, demonstrating strong stability and reliability.

Explore the full paper at https://ieeexplore.ieee.org/document/9793781 or the pre-print at https://arxiv.org/abs/2112.10328.

Testimonials

"The world needs modern, spec-based API tests, so we can deliver APIs as-designed. Schemathesis is the right tool for that job."

Emmanuel Paraskakis - Level 250

"Schemathesis is the only sane way to thoroughly test an API."

Zdenek Nemec - superface.ai

"The tool is absolutely amazing as it can do the negative scenario testing instead of me and much faster! Before I was doing the same tests in Postman client. But it's much slower and brings maintenance burden."

Luděk Nový - JetBrains

"Schemathesis is the best tool for fuzz testing of REST API on the market. We are at Red Hat use it for examining our applications in functional and integrations testing levels."

Dmitry Misharov - RedHat

"There are different levels of usability and documentation quality among these tools which have been reported, where Schemathesis clearly stands out among the most user-friendly and industry-strength tools."

Testing RESTful APIs: A Survey - a research paper by Golmohammadi, et al.

Contributing

We welcome contributions in code and are especially interested in learning about your use cases. Understanding how you use Schemathesis helps us extend its capabilities to better meet your needs.

Feel free to discuss ideas and questions through GitHub issues or on our Discord channel. For more details on how to contribute, see our contributing guidelines.

Let's make it better together 🤝

Your feedback is essential for improving Schemathesis. By sharing your thoughts, you help us develop features that meet your needs and expedite bug fixes.

  1. Why Give Feedback: Your input directly influences future updates, making the tool more effective for you.
  2. How to Provide Feedback: Use this form to share your experience.
  3. Data Privacy: We value your privacy. All data is kept confidential and may be used in anonymized form to improve our test suite and documentation.

Thank you for contributing to making Schemathesis better! 👍

Commercial support

If you're a large enterprise or startup seeking specialized assistance, we offer commercial support to help you integrate Schemathesis effectively into your workflows. This includes:

  • Quicker response time for your queries.
  • Direct consultation to work closely with your API specification, optimizing the Schemathesis setup for your specific needs.

To discuss a custom support arrangement that best suits your organization, please contact our support team at [email protected].

Additional content

Papers

Articles

Videos

License

This project is licensed under the terms of the MIT license.

schemathesis's People

Contributors

aexvir, barrett-schonefeld, bo5o, chr1st1ank, dependabot[bot], devkral, dongfangtianyu, egorlutohin, fallion, hebertjulio, hultner, huwcbjones, jamescooke, kianmeng, mdavis-xyz, paveldedik, prayags, ronnypfannschmidt, sstremler, stannum-l, stegayet, stranger6667, svtkachenko, thebigroomxxl, tstrozyk, tuckerwales, tuffnatty, vikahl, wescran, zhukovgreen


schemathesis's Issues

Support for formData

Add support for formData parameters type.

https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#parameterIn

Implementation could include these steps:

  • Adding a condition branch in schemas.SwaggerV20.process_parameter for formData type
  • Adding a corresponding method for handling this parameter type - there is nothing specific, and it could be similar to process_query
  • Extend the PreparedParameters, Endpoint, and Case classes with a form_data attribute of type Dict[str, Any].
  • Extend generator.PARAMETERS and generator.get_case_strategy accordingly (it should be similar to other parameters)
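
A rough sketch of what the new branch could look like, following the steps above (method and attribute names are taken from the issue text and may not match the actual codebase):

def process_parameter(self, result, parameter):
    ...
    if parameter["in"] == "formData":
        self.process_form_data(result, parameter)

def process_form_data(self, result, parameter):
    # Analogous to process_query: collect the parameter into the form_data container
    result.form_data[parameter["name"]] = parameter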

Better deselecting support for pytest

It would be nice to be able to filter out test items by their full name, including the part added by Schemathesis. However, this behavior is not currently tested.
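
For illustration, deselecting by a part of the generated test ID could look like this (the exact ID format is an assumption):

pytest test_api.py -k "GET and users"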

Implement required/optional parameters check

The current behavior is based on having data that matches the given schema. But we need to go beyond this point:

  1. Check whether required parameters are really required; the test should fail if such a parameter is actually optional.
  2. Check whether optional parameters are really optional; the test should fail if such a parameter is actually required.

One possible way to achieve this is to adjust the schema at runtime before creating a strategy for it. For (1) we need to remove required parameters (or a subset of them) from the schema completely; for (2) removing an optional value should not lead to a failure.

These approaches could be extracted into a separate level - we could have one extra parametrization level that includes the following variants:

  • Current: test data matches the schema; no runtime schema changes.
  • (1) Test data doesn't match the schema - required fields are removed, and the endpoint is expected to fail for this reason.
  • (2) Test data matches the schema - optional fields are removed, and their absence should not lead to an error.

We could produce these 3 dimensions by default:

  • method
  • endpoint
  • test type (from the mentioned above)

This will expand the scope of problems Schemathesis can verify
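
To illustrate variant (1), dropping required properties from a (simplified) JSON-Schema-like dict before building a strategy could look roughly like this - a sketch, not the actual implementation:

def drop_required(schema: dict) -> dict:
    # Variant (1): remove the required properties entirely so the endpoint
    # is expected to reject the generated data
    required = schema.get("required", [])
    properties = {
        name: sub_schema
        for name, sub_schema in schema.get("properties", {}).items()
        if name not in required
    }
    return {**schema, "properties": properties, "required": []}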

Pass endpoint / method to CLI

BaseSchema has method and endpoint attributes that are used to filter tests to a specific method/endpoint combination. It would be nice to have the same in the CLI as well (the values could be passed to the from_uri function).
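
A possible CLI interface (the exact option names are only a suggestion):

schemathesis run --endpoint /users --method GET https://example.com/api/swagger.json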

Unittest support

It should be possible to run Schemathesis without pytest. This would make it easier to use Schemathesis in projects that don't use pytest.

Real network test cases executor

We need to have some runner abstraction that will be able to execute the given test cases against a certain target: a URL or a WSGI app instance.

For the real-network case, it would be nice to execute the cases asynchronously for speed. For now, we only need the real-network executor; a WSGI one could be done separately.

Input: Base URL

Output: a list of responses. It probably shouldn't be an aiohttp-specific response - a simplified structure would let alternative executors produce the same output in the future.

Alternatively, it could be a mapping Case -> Response, which might be better for future processing.

The core logic here is to convert a Case into kwargs for aiohttp so it can make a proper network request.
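
A rough sketch of that conversion (the Case attributes and the helper are illustrative, not the final design):

import aiohttp

async def execute_case(session: aiohttp.ClientSession, base_url: str, case) -> dict:
    # Convert the Case into request kwargs and return a simplified response structure
    async with session.request(
        case.method,
        base_url + case.formatted_path,  # assumes path parameters are already substituted
        headers=case.headers,
        params=case.query,
        json=case.body,
    ) as response:
        return {"status": response.status, "body": await response.text()}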

Release, please :)

Getting an error in 0.4.1 on test collection:

.../site-packages/schemathesis/extra/pytest_plugin.py:53: in collect
    filter_endpoint=self.schemathesis_case.filter_endpoint,
.../site-packages/schemathesis/extra/pytest_plugin.py:50: in <listcomp>
    item
.../site-packages/schemathesis/schemas.py:88: in get_all_endpoints
    prepared_parameters = self.get_parameters(parameters, definition)
.../site-packages/schemathesis/schemas.py:102: in get_parameters
    self.process_parameter(result, parameter)
.../site-packages/schemathesis/schemas.py:117: in process_parameter
    self.process_body(result, parameter)
.../site-packages/schemathesis/schemas.py:131: in process_body
    if body["type"] == "object":
E   KeyError: 'type'

As far as I can see, it was fixed in #50 (thanks to mutmut). When is the release going to happen?

Process the execution results

Given the list of responses (or the case-to-response mapping), we need to process it and create a summary. We can start with a simple report:

  • Number of successful responses
  • Number of failed responses

For now, the output could be a simple dictionary that could later be formatted into a text representation for stdout or into JSON.

We need the summary to measure how many failures there are; if we also provide additional info (e.g., which input triggered a certain error), it will help to fix the issues themselves.
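
The summary could be as small as this (field names are illustrative):

summary = {
    "total": len(results),
    "succeeded": sum(1 for r in results if r["status"] < 500),
    "failed": sum(1 for r in results if r["status"] >= 500),
}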

Generate test cases from the given parameters

We need to create N instances of Case from the given parameters. This could be done by creating a function wrapped into a given call that appends all cases to a list from the enclosing scope. A code snippet to illustrate the idea:

from hypothesis import given, strategies as st


def generate_cases(*args, **kwargs):
    data = []

    @given(i=st.integers())
    def generator(i):
        data.append(i)

    generator()
    return data

Or it could be a method on Parametrizer for example:

class Parametrizer:
    ...
    def generate_cases(self, **kwargs):
        ...

The pytest plugin generates different tests for endpoint/method combos and then wraps them into given - maybe this logic could be factored out of the plugin itself. Not sure, but we need to gather cases from all these combos.

Support for "cookie" parameter in OAS3

Implementation could include these steps:

  • Adding a condition branch in schemas.OpenApi30.process_parameter for cookie type
  • Adding a corresponding method for handling this parameter type - there is nothing specific, and it could be similar to process_query
  • Extend the PreparedParameters, Endpoint, and Case classes with a cookies attribute of type Dict[str, Any].
  • Extend generator.PARAMETERS and generator.get_case_strategy accordingly (it should be similar to other parameters)

Add support for filtering in loaders

It could be helpful to instantiate a schema like this:

# test_users_api.py
schema = schemathesis.from_path("path.yaml", endpoint="/v1/users", method="GET")
...

E.g., to separate tests into different modules.

Refactor project structure

At the moment some structural problems could be addressed, to simplify further development:

  • Duplication between PreparedParameters and Endpoint - there could be only Endpoint;
  • Duplication between SchemaWrapper and BaseSchema subtypes - the wrapper doesn't do much except lazy schema loading, which could be done in BaseSchema directly, removing one level of indirection;
  • There are too many tests in test_parametrization.py - they could be decomposed into smaller modules;
  • The whole test structure could be more intuitive, denoting certain areas - value generation, petstore tests, tests for certain parameter types, etc.;
  • BaseSchema could maybe be rewritten in a more functional style to simplify it; not sure yet;
  • Readers themselves could be closer to Parametrizer - currently they proxy calls there (not sure if it will make sense);
  • Types could be simplified;
  • The overall project structure could be more generic - filters could be separated from schemas;

Specify line length for Flake8

The project currently follows a 120-character line limit, I guess. However, Flake8's default is 79. Let's specify it explicitly.
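
For example, in setup.cfg:

[flake8]
max-line-length = 120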

Optional lazy load for the schema

Our use case: we have no existing file with the specification, only apispec. We can generate docs from it, but:

  • Schemathesis must generate test cases BEFORE running any tests, at the test collection stage.
  • Flask app initialization happens in a fixture, before running the first test.

I thought about laziness for test case generation, but it looks like there is no way to do it. Maybe only some kind of subtests inside an existing test 🤔

My decision, for now, is to keep Schemathesis tests as a separate script, outside of the main tests. However, that makes it difficult to run on CI: you have to run the dev server in the background, and only after that run the script and test the Swagger spec.

Let's think about how to make test case generation lazy. If it's possible with the existing implementation (somehow nest test generation inside the test function and don't fail on the first assert), a code example somewhere in the docs would be very helpful.

Some ideas:
#36 (comment)

Ability to pass parameters directly

When testing an endpoint that expects some id in the path or any other part:

/api/items/{item_id}

It would be helpful to put a real item_id into the parameters if it is known before the test:

@schema.parametrize(path_parameters={"item_id": 42})
def test_items(case):
    case.path_parameters  # {"item_id": 42}

Under the hood, these parameters could be passed as st.just into the case strategy and be available in the case fixture.

Goals:

  • Improve the usefulness of generated test cases. Some endpoints require pre-defined parameters to work (otherwise the response is almost always 404 or similar);
  • Improve performance, since just is faster than from_schema (though in some cases it could be slower)

Alternative: since all parameters are named (kinda, as requestBody is a separate one), we could pass something like this to parametrize:

@schema.parametrize(overrides={"item_id": st.just(42)})
def test_items(case):
    ...

In this case, the data generation is more flexible - we can generate whatever we need with Hypothesis strategies

Implementation notes:

  • Check that the parameter name is valid and exists in the schema
  • During the Open API -> JSON Schema translation, do not include parameters that are in the "overrides" dictionary
  • Create a dynamic st.composite strategy that draws from the reduced schema first and then draws from the provided strategies (see the sketch below)
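
A sketch of such a composite strategy, assuming hypothesis-jsonschema's from_schema (the helper itself is illustrative):

from hypothesis import strategies as st
from hypothesis_jsonschema import from_schema

def build_case_strategy(reduced_schema: dict, overrides: dict) -> st.SearchStrategy:
    @st.composite
    def inner(draw):
        data = draw(from_schema(reduced_schema))  # schema without the overridden parameters
        for name, strategy in overrides.items():
            data[name] = draw(strategy)  # e.g. {"item_id": st.just(42)}
        return data

    return inner()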

Error during authorization via token

When my endpoint requires authorization via a token sent in a header, I get the following error:

requests.exceptions.InvalidHeader: Invalid return character or leading space in header: Authorization

Does the program work with "closed" (authenticated) endpoints?

CLI

A CLI tool that will:

  • Generate test cases from the given schema (probably with an option to print them to stdout);
  • Execute these test cases against a certain URL (if not specified in the schema);
  • Pass config options to the relevant part of the flow (test generation / execution).

Possible examples:

Generate cases:

schemathesis -s https://example.com/api/swagger.json --format=json

Execute:

schemathesis run https://example.com/api/swagger.json

Support x-nullable extension for swagger

From https://help.apiary.io/api_101/swagger-extensions/#x-nullable:

As it is not possible to declare null as an additional type to schemas in Swagger 2 this brings a limitation where you cannot define types which are nullable. In OAS3 this limitation is removed with the introduction of the nullable property which when set to true allows null to be a value alongside the original value(s) or type(s). This feature is backported to our Swagger 2 parser as a vendored extension x-nullable.

For example, to declare a schema for a type that may be a string or null:

type: string
x-nullable: true

x-nullable may also be used in conjunction with enumerations. In the below example the schema represents that the permitted values are either one of the strings north, east, south, west or null:

enum:
  - north
  - east
  - south
  - west
x-nullable: true

Supporting it will be a valuable addition since it is quite popular (at least at kiwi.com). It could be done by transforming the relevant schema into a correct JSON Schema representation, for example with anyOf:

{
  "anyOf": [
    { "type": "string", "maxLength": 5 },
    { "type": "null"}
  ]
}

Add tests for invalid schema & strategies

E.g., if the schema is a list instead of a dict, we should mark the test as an error.

And if the strategy has an error during evaluation (e.g. some deferred strategy from hypothesis-jsonschema), then the test should be marked as errored as well.

De-duplicate generated test cases

A list of generated test cases could contain duplicates because of Hypothesis' behavior, and we don't have to run the same cases multiple times since we are outside the Hypothesis environment - we run cases manually; there is no flaky-test detection or data shrinking.

More info about duplicated cases: HypothesisWorks/hypothesis#2087 (comment)

Could be done like this:

import json
from functools import partial

import attr


@attr.s(slots=True, hash=False)
class Case:
    """A single test case parameters."""

    ...

    def __hash__(self):
        serialize = partial(json.dumps, sort_keys=True)
        return hash(
            (
                self.path,
                self.method,
                serialize(self.path_parameters),
                serialize(self.headers),
                serialize(self.query),
                serialize(self.body),
            )
        )

So we can add Case instances to a set to eliminate duplicates, or we can maintain a list of cases and verify each case with an in check:

deduplicated = []
for case in cases:
    if case not in deduplicated:
        deduplicated.append(case)

This could be faster for small numbers of cases since there is no need to calculate hashes, and the Case class should be compared for equality by value, not by identity. For large numbers, the complexity grows linearly with the list size, since each check could require up to N comparisons.

More info - http://www.attrs.org/en/stable/hashing.html

Runner. Provide a way to setup authorization

At the moment there is no way to set up auth in runner._execute_all_tests, but it should be configurable.
We can start with these options:

  • Basic auth (for the CLI we can follow the cURL convention - the --user option);
  • A custom header (again, following the cURL convention - --header), so users can add an auth header manually.

As a next step, it would be nice to have more flexible code in the runner, e.g., by passing arguments to the requests.Session or by providing auth callbacks, to handle the case when a token expires (and should be refreshed).
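
Following the cURL convention mentioned above, CLI usage could look like this (option names are a proposal, not an existing interface):

schemathesis run --user user:password https://example.com/api/swagger.json
schemathesis run --header "Authorization: Bearer <token>" https://example.com/api/swagger.json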

Ability to register strategies for custom string formats

E.g., we have a card_number format that involves Luhn algorithm validation - it would be nice to register custom Hypothesis strategies for such cases. Most probably it could be solved by updating this dictionary - https://github.com/Zac-HD/hypothesis-jsonschema/blob/master/src/hypothesis_jsonschema/_impl.py#L637

At the top level it could look like this: schemathesis.register_string_format(name: str, strategy: st.SearchStrategy)

Such customization options will improve the quality of generated test cases (however, we also need to pass invalid values, e.g., without the format or pattern, to verify the contrapositive assumption - that invalid cases will be rejected).
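
With the proposed API, registration could look like this (the strategy below is a simplified placeholder that does not implement the Luhn check):

from hypothesis import strategies as st
import schemathesis

# Placeholder: 16-digit strings; a real strategy would also satisfy the Luhn checksum
card_numbers = st.from_regex(r"\A[0-9]{16}\Z")
schemathesis.register_string_format("card_number", card_numbers)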

Add code examples

It will be extremely helpful to have some code examples, so it will be easier to start using Schemathesis.

  • A straightforward test of the whole API, without filtration or app insights
  • Filters for specific endpoints
  • Tests with insights about the app data (an id of some existing DB object for GET / PUT calls)
  • Using subtesthack to unlock pytest fixtures per Hypothesis test

Test case executor for WSGI apps

Similar to what Flask has for testing - we don't have to use requests; we can work with the application directly. For this, we need to make this component interchangeable and write one that will be able to execute Schemathesis tests against an arbitrary WSGI app.

In general, this approach should be much faster than using a real network. The executor API design is up for discussion (feel free to propose any).

The main point of the issue is to simplify Schemathesis adoption - it is easier and faster to use a WSGI instance in certain frameworks like Flask or Django.

The implementation could be adopted from existing test clients, for example from Flask.

https://github.com/pallets/flask/blob/master/src/flask/testing.py#L115

Or, maybe we can use werkzeug directly - https://github.com/pallets/werkzeug/blob/master/src/werkzeug/test.py#L768

PEP: https://www.python.org/dev/peps/pep-3333/
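
A minimal sketch using werkzeug's test client directly (the Case attributes mirror those used elsewhere in these issues and are assumptions here):

from werkzeug.test import Client
from werkzeug.wrappers import Response

def execute_case_wsgi(app, case):
    # Run a single generated case against a WSGI app without any network I/O
    client = Client(app, Response)
    return client.open(
        case.formatted_path,  # assumes path parameters are already substituted
        method=case.method,
        headers=case.headers,
        query_string=case.query,
        json=case.body,
    )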

Check alternative base paths

In OAS 3 it could be specified per server:

servers:
  - url: http://localhost:8080/{basePath}
    description: application
    variables:
      basePath:
        default: api

Currently, Schemathesis only uses basePath from the schema root, which may not work for some OAS 3 schemas.

Automating this would reduce the number of actions needed to run the tests; currently, the base URL has to be set manually in a test case.

Application-side instrumentation

To fix problems discovered with Schemathesis (ST), we need to get the maximum useful info about the error with minimal interference with the behavior of the application under test.

To get this info we can implement some instrumentation code, which will be set up on the application side.

Similar to sentry_sdk, different plugins can hook into certain web frameworks and send info to the main ST process.
The core part of such an SDK would use a transport like a ZMQ PUSH socket to send the context info (at least a stack trace), while the main ST process would use a PULL socket to gather the info from the app and then show the results on the ST side.

The output format is TBD, but it could include the input data, URL, headers, running time and the exception trace.

Currently, to fix issues in our apps I have to change them so that the exception is returned in the web response; however, the app changes should be minimal, and the response shouldn't be changed at all. Alternatively, exceptions could be logged, but then I need to gather them manually afterwards - fine on localhost, but maybe not that convenient in other cases. Also, having this info on the ST process side will simplify the whole pipeline: after receiving all the data we can clean it up, remove duplicates, cluster errors, etc.

The Python SDK could live separately, to have minimal requirements and support more Python versions than Schemathesis does.

In the future, the application-side data could include coverage info per case to guide test data generation (as libFuzzer does)
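
A bare-bones illustration of the PUSH/PULL transport with pyzmq (the message fields are only examples):

import zmq

ctx = zmq.Context()

# Application side (SDK): push error context to the main ST process
push = ctx.socket(zmq.PUSH)
push.connect("tcp://127.0.0.1:5555")
push.send_json({"url": "/users", "headers": {}, "traceback": "Traceback (most recent call last): ..."})

# Schemathesis side: pull and aggregate events from the application
pull = ctx.socket(zmq.PULL)
pull.bind("tcp://127.0.0.1:5555")
event = pull.recv_json()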

Use random port for runner tests

The hard-coded port causes problems with parallel tox runs (tox -p all) - the port could already be in use and the tests will fail. It is a small improvement, but it will be helpful.

There is a fixture in aiohttp which probably could be used - aiohttp_unused_port

Possible conflict in filtration by method / endpoint

For example, there is a lazy schema that contains some filters by method/endpoint (created directly, not via a loader). In this case, if method/endpoint is not set in the parametrize arguments, those filters in the schema will be overridden with None, which is not what one would expect.
Not that relevant at the moment, but it could become more important once we allow adding filters to loaders.

Update docs

The whole workflow should be clear. Currently we are missing:

  • how to run the tests
  • filters for URL or endpoint
  • roadmap info
  • examples of errors that could be found
  • maybe an example of a buggy schema & app, so users can run examples directly in the repo
  • FAQ - e.g., why Hypothesis sometimes generates more examples
  • proper links to other projects
  • possible workflows - as a part of a test suite, a CI step, the CLI (for future reference)
  • how to use subtesthack for pytest fixtures
  • more? Non-Python use cases

Negative testing

The current approach produces data that matches the schema. To verify that wrong data is not processed without an error, we need to mutate the schema at runtime before creating a strategy, so that the generated data does not match the initial schema (e.g., it shouldn't be a subset of the initial schema). This could be another "test_type" option.

With this feature, we will expand the scope of errors that could be discovered by Schemathesis

Runner. Execute tests for different endpoints in parallel

Currently, the runner module executes tests sequentially, but tests for different method/endpoint combos could be executed in parallel. However, it apparently has to be done in separate processes, since Hypothesis has internal state that prevents running it in different threads or in the same thread via asyncio. The network part could be done asynchronously, but I'm not sure how to design it that way - I made two attempts but failed because of hitting the Hypothesis deadline (maybe disabling the deadline setting will help, not sure), and I didn't properly try separate processes. Also, it would be nice to consult the Hypothesis core devs.

How to use `hypothesis.settings` with `schemathesis.from_pytest_fixture`

Example from readme:

from hypothesis import settings

@settings(max_examples=5)
def test_something(client, case):
    ...

My code:

schema = schemathesis.from_pytest_fixture("swagger_schema")

@hypothesis.settings(max_examples=1)
@schema.parametrize()
def test_swagger_endpoints(client, case):
    ...

On tests run:

.../site-packages/schemathesis/lazy.py:19: in test
    schema = get_schema(request, self.fixture_name, method, endpoint)
.../site-packages/hypothesis/_settings.py:253: in new_test
    "Using `@settings` on a test without `@given` is completely pointless."
E   hypothesis.errors.InvalidArgument: Using `@settings` on a test without `@given` is completely pointless.
