
OpenActive Test Suite Reference Implementation

To join the conversation, we're on the OpenActive Slack at #openactive-test-suite.

The general aim of this project is to allow end-to-end testing of the various flows and failure states of the Open Booking API.

Running npm start in the root will run the OpenActive Test Suite, which comprises these packages:

  • Integration Tests: this performs automated tests against the API.
  • Broker Microservice: this sits in between the test suite and the target Open Booking API implementation. This allows the integration tests to watch for changes to the various RPDE feeds.
  • OpenID Test Client: this connects to the target Open Booking API's OpenID Provider. This allows the Broker and Integration Tests to authorize with the implementation.
  • Test Interface Criteria: this allows the test suite to tailor specific opportunities to specific tests by implementing the OpenActive Test Interface Criteria.

Usage

Running npm start will orchestrate running the Broker Microservice and the Integration Tests in order to test your Open Booking API implementation.

Note that the implementation under test will need to implement the OpenActive Test Interface to run in controlled mode, and for selected tests.

Quick start

You can check that the test suite works in your local environment by running it against the hosted OpenActive Reference Implementation, simply by using the default configuration:

git clone git@github.com:openactive/openactive-test-suite.git
cd openactive-test-suite
npm install
npm start -- core

Note that the above command only runs the "core" tests within the test suite, which should take around 60 seconds to complete.

The hosted OpenActive Reference Implementation is running on a basic developer tier Azure instance with a burst quota, so it will not handle the load of a test suite run for all tests (hence npm start -- core); if the hosted application shuts down, simply wait 5 minutes and try again.

Configuration

In order to run the test suite against your own implementation, configure the test suite by creating a copy of config/default.json named config/{NODE_ENV}.json (where {NODE_ENV} is the value of your NODE_ENV environment variable), containing the properties described below.

The test suite uses the file config/{NODE_ENV}.json to override the settings in default.json. For development and deployment, create a new file rather than editing default.json, so that any new required settings added in future versions can be automatically updated in default.json.

For more information about this use of NODE_ENV see this documentation.

By convention, much of the documentation assumes you have created a config/dev.json file, which Test Suite will use when the environment variable NODE_ENV=dev is set. However, you can use any name you like, and have multiple configuration files for different environments.
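The layering behaviour can be pictured as a deep merge of the two files. The sketch below is a simplified model of what node-config does, not its actual implementation:

```javascript
// Simplified model of how node-config layers config/{NODE_ENV}.json
// over config/default.json (node-config itself also handles env vars,
// NODE_CONFIG, and more -- this just illustrates the merge).
function mergeConfig(defaults, overrides) {
  const result = { ...defaults };
  for (const [key, value] of Object.entries(overrides)) {
    const bothObjects = value && typeof value === 'object' && !Array.isArray(value)
      && result[key] && typeof result[key] === 'object' && !Array.isArray(result[key]);
    // Recurse into nested objects; scalars and arrays are replaced outright.
    result[key] = bothObjects ? mergeConfig(result[key], value) : value;
  }
  return result;
}

// A dev.json that only overrides datasetSiteUrl keeps every other
// setting from default.json.
const defaultJson = { datasetSiteUrl: 'https://reference-implementation.openactive.io/openactive', ci: false };
const devJson = { datasetSiteUrl: 'https://localhost:5001/openactive' };
console.log(mergeConfig(defaultJson, devJson));
```

This is why new required settings added to default.json in future versions take effect without you having to touch your own file.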

Configuration for sellers within ./config/{NODE_ENV}.json

The primary Seller is used for all tests; when "useRandomOpportunities": true, random opportunities are selected from this Seller. The secondary Seller is used only for multiple-sellers tests.

An example, using OpenID Connect Authentication:

  "sellers": {
    "primary": {
      "@type": "Organization",
      "@id": "https://reference-implementation.openactive.io/api/identifiers/sellers/0",
      "authentication": {
        "loginCredentials": {
          "username": "test1",
          "password": "test1"
        }
      },
      "taxMode": "https://openactive.io/TaxGross",
      "paymentReconciliationDetails": {
        "name": "AcmeBroker Points",
        "accountId": "SN1593",
        "paymentProviderId": "STRIPE"
      }
    },
    "secondary": {
      "@type": "Person",
      "@id": "https://reference-implementation.openactive.io/api/identifiers/sellers/1",
      "authentication": {
        "loginCredentials": {
          "username": "test2",
          "password": "test2"
        }
      },
      "taxMode": "https://openactive.io/TaxNet"
    }
  }

Description of each field:

  • authentication: Check out the Configuration for Seller Authentication section.

  • taxMode: Which Tax Mode is used for this Seller.

    Note: If testing both Tax Modes, make sure that there is at least one Seller with each. Alternatively, if not supporting multiple Sellers, you can run the Test Suite once with "taxMode": "https://openactive.io/TaxNet" and once with "taxMode": "https://openactive.io/TaxGross". However, it is not currently possible to generate a certificate that covers both configurations unless multiple Sellers are supported.

  • paymentReconciliationDetails: If testing Payment Reconciliation Detail Validation, include the required payment reconciliation details here.

Configuration for Seller Authentication

In order to make bookings for a specific Seller's Opportunity data, some kind of authentication is required to ensure that the caller is authorized to make bookings for that Seller.

Test Suite allows for a few different options for Seller Authentication. This determines the data to put in the authentication field for each Seller:

OpenID Connect

View Spec

You'll need the username/password that the Seller can use to log in to your OpenID Connect Provider.

Example:

  "sellers": {
    "primary": {
      // ...
      "authentication": {
        "loginCredentials": {
          "username": "test1",
          "password": "test1"
        }
      }
    },

Request Headers

A set of HTTP request headers that will be used to make booking requests. There are no restrictions on the requestHeaders that can be specified.

Example:

  "sellers": {
    "primary": {
      // ...
      "authentication": {
        "loginCredentials": null,
        "requestHeaders": {
          "X-OpenActive-Test-Client-Id": "booking-partner-1",
          "X-OpenActive-Test-Seller-Id": "https://localhost:5001/api/identifiers/sellers/1"
        }
      }
    },

Client Credentials

OAuth Client Credentials are used to make booking requests.

Example:

  "sellers": {
    "primary": {
      // ...
      "authentication": {
        "loginCredentials": null,
        "clientCredentials": {
          "clientId": "clientid_XXX",
          "clientSecret": "example"
        }
      }
    },

This is different from the behaviour in the Client Credentials sub-section mentioned within the OpenID Connect Booking Partner Authentication for Multiple Seller Systems section in the spec as, in this case, Client Credentials are used to make booking requests for this Seller, rather than just to view the Booking Partner's Orders Feed.

Installation

Node.js version 14 or above is required.

npm install

This will install the dependencies needed for all packages in the test suite.

For developers customising the installation (e.g. for use in Docker): the directories ./packages/test-interface-criteria and ./packages/openactive-openid-test-client are dependencies, and so must be present during npm install.

Running

Where dev.json is the name of your {NODE_ENV}.json configuration file:

export NODE_ENV=dev
npm start

This will start the broker microservice (openactive-broker-microservice) and run all integration tests (openactive-integration-tests) according to the feature configuration. It will then kill the broker microservice upon test completion. The console output includes both openactive-broker-microservice and openactive-integration-tests. This is perfect for CI, or simple test runs.

Alternatively the Broker microservice and Integration tests may be run separately, for example in two different console windows. This is more useful for debugging.

Running specific tests

Any extra command line arguments will be passed to jest in openactive-integration-tests. For example:

export NODE_ENV=dev
npm start -- --runInBand test/features/core/availability-check/

It is also possible to use a category identifier or feature identifier as short-hand:

export NODE_ENV=dev
npm start -- core
export NODE_ENV=dev
npm start -- availability-check

Read about Jest's command line arguments in their CLI docs.

Environment variables

NODE_CONFIG

The configuration of the test suite can be overridden with the environment variable NODE_CONFIG, where any specified configuration will override values in both config/default.json and config/{NODE_ENV}.json. More detail can be found in the node-config docs. For example:

NODE_CONFIG='{ "waitForHarvestCompletion": true, "datasetSiteUrl": "https://localhost:5001/openactive", "sellers": { "primary": { "@type": "Organization", "@id": "https://localhost:5001/api/identifiers/sellers/0", "requestHeaders": { "X-OpenActive-Test-Client-Id": "test", "X-OpenActive-Test-Seller-Id": "https://localhost:5001/api/identifiers/sellers/0" } }, "secondary": { "@type": "Person", "@id": "https://localhost:5001/api/identifiers/sellers/1" } }, "useRandomOpportunities": true, "generateConformanceCertificate": true, "conformanceCertificateId": "https://openactive.io/openactive-test-suite/example-output/random/certification/" }' npm start

PORT

Defaults to 3000.

Set PORT to override the default port that the openactive-broker-microservice will expose endpoints on for the openactive-integration-tests. This is useful in the case that you already have a service using port 3000.

FORCE_COLOR

E.g. FORCE_COLOR=1

Set this to force the OpenActive Test Suite to output in colour. The OpenActive Test Suite uses chalk, which attempts to auto-detect the color support of the terminal. For CI environments this detection is often inaccurate, and FORCE_COLOR=1 should be set manually.

Continuous Integration

Assuming configuration is set using the NODE_CONFIG environment variable as described above, the test suite can be run within a continuous integration environment, as shown below:

#!/bin/bash
set -e # exit with nonzero exit code if anything fails

# Get the latest OpenActive Test Suite
git clone git@github.com:openactive/openactive-test-suite.git
cd openactive-test-suite

# Install dependencies
npm install

# Start broker microservice and run tests
npm start

"ci": true must be included in the supplied NODE_CONFIG to ensure correct console logging output within a CI environment.

Note that running npm start in the root openactive-test-suite directory will override waitForHarvestCompletion to true in default.json, so that the openactive-integration-tests will wait for the openactive-broker-microservice to be ready before it begins the test run.

Test Data Requirements

In order to run the tests in random mode, the target Open Booking API implementation will need to have some Opportunity data pre-loaded. Use Test Data Generator to find out how much data is needed and in what configuration.

Contributing

Concepts

Booking Partner Authentication Strategy

The method by which a Booking Partner authenticates with the Open Booking API implementation. There are a number of supported strategies, including OpenID Connect, HTTP Header, etc.

Your implementation will need to support at least one Authentication Strategy for each of Orders Feed Authentication and Booking Authentication.

Orders Feed Authentication

How a Booking Partner accesses the Orders Feed containing updates to Orders that they have created.

For Test Suite, the selected Orders Feed Authentication Strategy is configured with the broker.bookingPartners configuration property and documentation on the supported strategies can be found there.

Booking Authentication

How a Booking Partner accesses the booking endpoints (C1, C2, B, etc) for a specific Seller's data. This differs from Orders Feed Authentication as it can be specified at the per-Seller level for Multiple Seller Systems (relevant feature: multiple-sellers).

For Test Suite, the selected Booking Authentication Strategy is configured with the sellers configuration property and documentation on the supported strategies can be found there.

Contributors

civsiv, dependabot[bot], eliasfernandez, github-actions[bot], henryaddison, joshualevett, lukedawilson, lukehesluke, nemanjastefanovic, nickevansuk, openactive-bot, reikyo, sbscully, thill-odi, ylt


Issues

Add core/availability-check tests

Child of #56

Tests when feature is marked as implemented

  • Given a known opportunity that can be booked
    When running C1 and C2
    Then the values in the responses from the requests should match the known values of the opportunity

  • Given an opportunity that is not bookable
    When running C1 and C2
    Then an OpportunityOfferPairNotBookableError should be returned

  • Given an unknown opportunity (fictional identifiers)
    When running C1 and C2
    Then expect an UnknownOpportunityDetailsError error to be returned

Tests when feature is marked as not-implemented

A no-op failing test, because the feature is required.

Add documentation around test event endpoints

The README should include documentation around the test endpoints that this tooling expects to exist for creating and deleting events

The integration tests rely on the following Test Data Endpoints to facilitate testing, and the service documentation must specify the requirements for these:

  • CreateEvent(type, event)
    • Creates a test event of the specified type in the database
  • DeleteEvent(type, name)
    • Deletes a test event of the specified type with the specified name
  • TriggerSimulation(simulation, name)
    • Triggers a simulation for the event with the specified name, for example “provider-side cancellation” or “change of logistics”
  • Cleanup
    • Calls the “Cleanup test data” in the system

Add travis CI example script

A travis CI example script should be provided that starts the feed listener, then runs it against the .NET reference implementation

Include validation warnings in log output only

Validation warnings were grouped randomly in the output (as they used console.warn).

I've updated them to be included in the log files, which is hopefully a better place for warnings.

Validation errors should be included in test output and log files.

Add user-defined configuration to test suite

An in-memory configuration that reads from a JSON file.

Will be used in other issues to determine, amongst other things, which features to test and which opportunity types to try; but this issue is about adding a generic configuration system for those later issues to take advantage of.

Clearly reference the relevant log file

Ensure the errors returned by Jest clearly reference the relevant log file where the request/response content can be found, to make it easier to navigate the output

Multiple improvements to test suite

  1. Improve flow.C1():
  • To accept a request template file as a parameter, to allow some test suites to customise their request templates
  • Note that some tests will book more than one event, so we will need to account for multiple OrderItems (see comment #34 (comment))
  • Include validation on the output of C1 with the correct validation mode (and if the status code is not 200 then use "OpenBookingError" validation mode)
  2. Create something like state.getRandomEvent(["ScheduledSession", "ScheduledSession", "Slot"]) - note this is named generically, with the specified type of event passed through to the microservice, preparing the way for #21 - so we can create a suite of tests that works for random stuff specifically. Note that passing an array of event types also allows for the creation of more than one event. In fact - perhaps even better - should such an array actually live in this file, with a randomEvent: ["ScheduledSession", "ScheduledSession", "Slot"] array used in place of the event array? That way state.createEvent(testEvent) (see 4 below) could do both random and explicit modes depending on testEvent?

  3. Ensure that the flow.C1 pattern above is extended to all endpoints (https://www.openactive.io/open-booking-api/EditorsDraft/#paths-and-verbs), noting the special treatment of the OrdersFeed that we already have in the code

  4. Ensure that the file https://github.com/openactive/openactive-test-suite/blob/enhanced-logging/packages/openactive-integration-tests/test/flows/book-and-cancel/book-and-customer-cancel-success/book-nonfree-test.js can include a different @type (e.g. "ScheduledSession") - which it should do without issues. So perhaps we simply rename createScheduledSession() to be more generic (e.g. createEvent()), as it should handle whichever type is specified in the file.

  5. Generalise some of the C1, C2 tests etc. to provide shared examples that can be reused across shared test suites.

Run subset of tests and address performance issues

Can we specify parameters somehow to only run some of the tests, or otherwise pipe more of the output into the log files?

For example could we add a category (e.g. "C1only", "endtoend") to each of the tests, and use parameters to run only tests in a certain category?

The results are fairly overwhelming on first run

"get-match" result should be validated same as C1 result

The validator should be run over the contents of data within the "get-match" result. The validator should be run in BookableRPDEFeed mode.

Potentially adding something like the below before C1 in book-shared.js, and removing await flow.getMatch(); from beforeAll?

describe("OpportunityFeed", function() {
  (new OpportunityFromFeed({ state, flow, logger, dataItem }))
    .beforeSetup()
    .successChecks()
    .validationTests();
});

Include a description of expected behaviour in tests

Include expected behaviour in the tests somehow, to make it clear what’s happening. Perhaps a description can be added, e.g. "Opportunity.startDate is in the past, which means it is not possible to cancel"

Implement "config" endpoint for Integration Tests

Implement "config" endpoint for Integration Tests to easily read dataset site configuration, and pass on authentication credentials necessary to test the Open Booking API

Refactor Integration Tests to use "config" endpoint instead of its own local config. The configuration from the “config” endpoint should be read at startup, and the tests should not run if there are issues with it.

Run each test using pairs of items of different opportunity type as the test data

It is not enough just to check that the API behaves when all items are of the same opportunity type. Instead there must be test cases covering pairs of order items where the items are of different types.

These must cover each possible pairing of available opportunity types (so, based on 5 possible types, that's 10 pairs: 5 choose 2).
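The pair count can be checked by enumerating the combinations directly; a small sketch (the five type names below are illustrative):

```javascript
// Enumerate every unordered pair of distinct opportunity types.
const opportunityTypes = ['ScheduledSession', 'Slot', 'Event', 'HeadlineEvent', 'CourseInstance'];

function unorderedPairs(items) {
  const pairs = [];
  for (let i = 0; i < items.length; i += 1) {
    for (let j = i + 1; j < items.length; j += 1) {
      pairs.push([items[i], items[j]]); // each pair once, order ignored
    }
  }
  return pairs;
}

console.log(unorderedPairs(opportunityTypes).length); // 10 pairs for 5 types
```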

Depends on:

  • Supporting multiple types of event [#51]
  • Using more than one event at a time in a test [#53]

Subtasks:

  • When running each test, use pairs of each type of opportunity as the test events data.

Run each test using 1, 2 and 3 items as the test data

See https://github.com/openactive/openactive-test-suite/tree/master/packages/openactive-integration-tests/test/features#testing-scope

All tests are run for the following:

  • 1, 2 and 3 order items of each configured type. Duplicating the defined template in the test.

It is not enough just to check that the API behaves when there is only a single order item. Instead there should be test cases verifying the implementation when there are 1, 2 and 3 order items (all of the same opportunity type). This should be repeated for each available opportunity type.

To get random events, we will need to sample without replacement (rather than getting single events multiple times) to avoid re-using the same event. And for creating events multiple times, we will need to use a template or something like faker to avoid repeating data which should be unique.
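Sampling without replacement can be sketched as below (the function name is illustrative, not Test Suite's actual API):

```javascript
// Pick `n` distinct items from `pool` without replacement, so the
// same opportunity is never used twice within one test.
function sampleWithoutReplacement(pool, n) {
  const copy = [...pool]; // don't mutate the caller's array
  const sample = [];
  for (let i = 0; i < n && copy.length > 0; i += 1) {
    const index = Math.floor(Math.random() * copy.length);
    sample.push(copy.splice(index, 1)[0]); // remove so it can't be picked again
  }
  return sample;
}

console.log(sampleWithoutReplacement(['a', 'b', 'c', 'd', 'e'], 3));
```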

We will need to update the state to pass multiple events at a time through the flow (NB a single event is an array of one item).

NB not every test will make sense using multiple test events at once.

Depends on:

  • Supporting multiple types of event [#51]

Related:

  • Be aware of #54 to have tests run for pairs of items of different types

Subtasks:

  • When running each test, use arrays of size 1, 2 and 3 as the set of test events for the test

Re-organize folder structure

New folder/file structure for tests: category > feature > flow

Different test events used in runs of each test flow are to be defined in a single file per flow.

Including logger changes and utility scripts will need more re-organizing too.

See also discussion in #11

Add payment/payment-reconciliation-detail-validation tests

Child of #56

Tests when feature is marked as implemented

Given an opportunity that requires Payment Details being supplied (regex match name, fallback to bookable)
When using accountId "TEST" (NB this value should be configurable)
Then running C1, C2 and B should succeed

Given an opportunity that requires Payment Details being supplied (regex match name, fallback to bookable)
When excluding accountId
Then running C1, C2 and B should fail with InvalidPaymentDetailsError

Tests when feature is marked as not-implemented

None

Depends on

  • Existence of user-defined configuration system #58

Allow user to specify whether to run random, controlled or both types of tests globally

See "Types of test" at https://github.com/openactive/openactive-test-suite/tree/master/packages/openactive-integration-tests/test/features#testing-scope

In random mode, the test will use a random event found in the feed. In controlled mode, the test will create the events it needs. Controlled tests may allow for more thorough testing, but not every API implementation will be able to support the extra API endpoints needed to allow for it. Therefore we need to provide a way to allow the user to turn each test mode on or off on a global basis.

How the user can configure the test modes must be documented.

Depends on:

  • Re-organize folder structure, including logger changes [#57]
  • Add user-defined configuration to test suite [#58]
  • Should be a simple extension of #50 to run only random or non-random or both (like implemented or not-implemented).

Subtasks:

  • Add global option to configuration to set the test modes
  • Determine which sets of test events to try with the test flows based on configuration (e.g. just the randomly found events, just the created events, or both)
  • How the user can configure the test modes must be documented.

Travis CI script example

A travis CI example script should be provided that starts the feed listener, then runs it against the .NET reference implementation

Add restriction/booking-window tests

Child of #56

Check documentation for info on Opportunity-Offer pairs

Tests when feature is marked as implemented

Given an Opportunity and Offer pair with Opportunity's startDate in range of validFromBeforeStartDate of the Offer
When running C1 and C2
Then it should succeed

Given an Opportunity and Offer pair with Opportunity's startDate outside of range of validFromBeforeStartDate of the Offer
When running C1 and C2
Then it should fail with Error [exact error TBC]

Tests when feature is marked as not-implemented

None

Allow user to configure which features should be tested

The API specification has many features, not all of which are required to be implemented in order to be conformant.

The user should be able to say whether each feature is implemented, not-implemented or disable-tests. The behaviour of the tests should match this configuration.

In the case of not-implemented, the tests will check the feature is actually not implemented (rather than ignoring it; that's the disable-tests case).

How the user can adjust this configuration and the effects of each mode must be documented.

See https://github.com/openactive/openactive-test-suite/tree/master/packages/openactive-integration-tests/test/features#testing-scope and #11 (comment) for more information.

Config could look like:

"features": {
    "opportunity-feed": "implemented",
    "dataset-site": "implemented",
    "availability-check": "not-implemented",
    "simple-book-free-opportunities": "not-implemented",
    "simple-book-with-payment": "not-implemented",
    "payment-reconciliation-detail-validation": "not-implemented",
    "booking-window": "not-implemented",
    "customer-requested-cancellation": "not-implemented",
    "seller-requested-cancellation": "not-implemented",
    "seller-requested-cancellation-message": "disable-tests",
    "cancellation-window": "implemented",
    "seller-requested-replacement": "implemented",
    "named-leasing": "implemented",
    "anonymous-leasing": "implemented"
  },
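The three modes could be interpreted by the test runner along these lines; a sketch, with a hypothetical function name:

```javascript
// Partition configured features by mode: 'implemented' runs the
// feature's tests, 'not-implemented' runs tests asserting the feature
// is absent, and 'disable-tests' skips the feature entirely.
function testsToRun(features) {
  const run = { implemented: [], notImplemented: [], skipped: [] };
  for (const [feature, mode] of Object.entries(features)) {
    if (mode === 'implemented') run.implemented.push(feature);
    else if (mode === 'not-implemented') run.notImplemented.push(feature);
    else if (mode === 'disable-tests') run.skipped.push(feature);
    else throw new Error(`Unknown mode "${mode}" for feature "${feature}"`);
  }
  return run;
}
```

Rejecting unknown mode strings up front would also cover the documentation requirement that the effects of each mode are well defined.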

Depends on:

  • Re-organize folder structure, including logger changes [#57]
  • Add user-defined configuration to test suite [#58]

Related:

  • Add more tests [#56] - these contain actual examples of non-implemented tests.

Subtasks:

  • Add per-feature option to configuration
  • Determine which test flows to run based on configuration
  • Document for user how to configure behaviour for each feature and the list of available features

Read Feed URLs from dataset site, and ensure appropriate error is thrown and displayed if dataset site is invalid

https://github.com/openactive/data-models/blob/master/versions/2.x/meta.json#L9

Read from dataset site in order to receive URL configuration of feeds, using the “parent” attribute of https://github.com/openactive/data-models/blob/master/versions/2.x/meta.json#L9 to automatically infer “Opportunity” and “OpportunityParent”

What is an OpportunityParent? In fact what is an Opportunity?
See the 'Bookable Objects in the Model' diagram on p. 3 here: https://docs.google.com/document/d/1mo_N1xa0H9D4uVEz3m8kyCPAAXcSEjwpEiv995hDxQs/edit#. Opportunities are the bottom row; their parents are the top (in humanspeak, an Opportunity is an 'opportunity to exercise')

Add core/opportunity-feed tests

Child of #56

Tests when feature is marked as implemented

It should be possible to pick an opportunity from the feed.

Tests when feature is marked as not-implemented

A no-op failing test, because the feature is required.

Add more tests

As well as exercising more flows of a booking system, these will also help to cover the new features added by #50 & #52.

The subissues for the extra tests are:

  • Add core/opportunity-feed tests #67
  • Add core/availability-check tests #68
  • Add core/simple-book-free-opportunities tests #69
  • Add payment/simple-book-with-payment tests #70
  • Add payment/payment-reconciliation-detail-validation tests #71
  • Add restriction/booking-window tests #72
  • Add cancellation/order-deletion tests #73
  • Add cancellation/customer-requested-cancellation tests #74
  • Add cancellation/seller-requested-cancellation tests #75
  • Add details-capture/attendee-details-capture-without-payment tests #76

Depends on:

  • Re-organizing the folder structure [#57]
  • Support for testing non-implemented features [#50] will be required to add every test
  • Some tests will require more options over selection of a random event from the feed for a test, based on desired criteria of the event, e.g. a non-bookable opportunity, or an event with startDate in range of validFromBefore. Currently we can only filter based on type of event, but #77 should fix this.

Subtasks:

  • Add all the tests defined by the subissues above
  • Document which events should be in the feed already for the tests to work [#78]

Add Config Endpoint to Feed Listener

Implement “config” endpoint to be used by Integration Tests to easily read dataset site configuration, and pass on authentication credentials necessary to test the Open Booking API

Related to #60 and #61

Allow user to configure which opportunity types are supported

See https://github.com/openactive/openactive-test-suite/tree/master/packages/openactive-integration-tests/test/features#testing-scope

Additionally, an array of bookable opportunity types can be configured, to indicate which types the implementation is expected to support

Allow a user to specify which of the 5 opportunity types should be supported and update tests to run variants of each test for each of those supported opportunities.

How the user can configure the available option types must be documented.

As in #11 (comment), the config format can be:

  "opportunity-types": {
    "sessions": true,
    "facilities": true,
    "events": false,
    "headline-events": false,
    "courses": false
  }

Once #59 has been done, we can determine all the valid feed types offered by the booking system under test. It has been decided that if a type is configured to be on but not available in the list of valid feed types, then there should be a test failure stating this is not valid.
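That check could look something like the sketch below (names are illustrative):

```javascript
// Flag every opportunity type configured "on" that the booking
// system's dataset site does not actually offer a feed for.
function unavailableConfiguredTypes(opportunityTypesConfig, availableFeedTypes) {
  return Object.entries(opportunityTypesConfig)
    .filter(([, enabled]) => enabled)        // keep only types turned on
    .map(([type]) => type)
    .filter((type) => !availableFeedTypes.includes(type));
}

// "facilities" is on in config but missing from the feeds:
const configured = { sessions: true, facilities: true, events: false };
console.log(unavailableConfiguredTypes(configured, ['sessions'])); // → [ 'facilities' ]
```

A non-empty result would be reported as a test failure before any flows run.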

Depends on:

  • Re-organize folder structure, including logger changes [#57]
  • Add user-defined configuration to test suite [#58]
  • Will use very similar approach to per-feature config [#50] and random/controlled config [#52]

Related:

  • Be aware of #53 to have tests run for 1, 2, and 3 items of each type
  • Be aware of #54 to have tests run for pairs of items of different types

Subtasks:

  • Add option to configuration to set the supported types
  • Run tests multiple times using events of each of the configured types
  • Add test failure for types turned on which are not available in feeds
  • Document how the user can configure which types are supported

Add more thorough documentation

Feedback from imin: "Add more thorough documentation. Especially as there's lots of potentially non-obvious design decisions in here. The broker is very simple (which is great!) but the integration tests are more complicated. So I would highly recommend fleshing out the documentation there. Summary of how exactly it works as well as "why?" questions. This is a great start: https://github.com/openactive/openactive-test-suite/blob/master/packages/openactive-integration-tests/DEVELOPMENT.md but it's written rather hastily"

Clean up logging

  • Joe to reduce the size of the titles and clean up logging
  • Joe to create a new PR to output errors alongside items

Add payment/simple-book-with-payment tests

Child of #56

Tests when feature is marked as implemented

Already done

Tests when feature is marked as not-implemented

  • Given an opportunity type
    When checking the opportunity feed
    Then it should not include any bookable sessions with a non-zero price

Output a test result certification zip file

The integration tests should include a script entry in the package.json to output a single Test Result Certification Zip File, which is a zip file containing all test results in a well-defined structure. This script must use only npm dependencies (e.g. gulp, and gulp-zip), and not rely on the local operating system to create the zip, to maximise compatibility.

In addition to containing the console output and log files (as evidence of the requests and responses from the implementation under test), the zip file should contain a machine-readable version of the results, to allow a separate service to easily verify that all tests pass.

Address performance issues

Extracted from #11

Additionally the test suite currently takes around 60 seconds to startup, before it runs any tests.

It outputs jest and then appears to wait for 30 seconds, before outputting Found 7 test suites and then waiting for a further 30 seconds.

Can we improve performance here? What are we doing to slow things down?

May be fixed by making parallel execution of tests configurable, as this can have the effect of slowing down testing for a larger number of tests against a database:

This is already configurable, just needs documenting. Defaults are in the package.json, and can also be changed via commandline args.

npm run test -- --maxWorkers 1

C1 not validating and other race conditions

If you run npm test -- test/flows/book-only/ against the reference implementation on master now, you'll see the C1 validation issue: validation is still not running. Very strange!

And if you point it at https://everyoneactivebookingfacade-test.azurewebsites.net/, many of the endpoints don't validate.

Perhaps there's some kind of race condition or similar happening here?

I think I might also have identified the cause of one of the "race conditions". It looks like the flows lack some defensive programming for cases where a prerequisite call fails, specifically the calls that hit the microservice and wait for the feed: getMatch and getOrder. I've made a quick fix for getMatch, but we need something similar for getOrder.

It's also probably worth stopping all the main calls (C1, C2, B, etc.) from executing if the opportunity couldn't be retrieved from the feed for some reason, as otherwise the suite will hit the booking system with loads of broken requests after the timeout (which may cause more race conditions?). I've tried to do this here but haven't quite managed it: ec96f67#diff-7ec38b867eb5885e1977c92e06a66053

When this issue is closed, the tests should be robust regardless of the response times of the endpoints.
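A minimal sketch of that defensive pattern, assuming hypothetical step names, might look like this: once any prerequisite step fails, all subsequent steps are marked skipped instead of firing broken requests at the booking system.

```javascript
// Sketch of defensive flow execution: if a prerequisite step (e.g. fetching
// the opportunity from the feed via the broker) fails, later steps (C1, C2,
// B, ...) are skipped. Class and step names are illustrative, not the suite's.
class FlowState {
  constructor() {
    this.failed = false;
    this.results = {};
  }

  async run(stepName, stepFn) {
    if (this.failed) {
      // A prerequisite already failed; don't hit the booking system again.
      this.results[stepName] = { status: 'skipped' };
      return this.results[stepName];
    }
    try {
      this.results[stepName] = { status: 'ok', response: await stepFn() };
    } catch (error) {
      this.failed = true;
      this.results[stepName] = { status: 'failed', error };
    }
    return this.results[stepName];
  }
}
```

With this shape, a failed getMatch would automatically short-circuit C1, C2 and B, and the test report can distinguish "failed" from "skipped because a prerequisite failed".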

Account for multiple OrderItems

Some tests will book more than one event, so we will need to account for multiple OrderItems in the test files (see comment #34 (comment))

We also need to allow randomEvent in each test to accept an array of random events that it will attempt to book, e.g. randomEvent: ["ScheduledSession", "ScheduledSession", "Slot"], resulting in a request with 3 random OrderItems (in a random order): two ScheduledSessions and one Slot.
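A sketch of how such an array could be expanded into shuffled OrderItems; `getRandomOpportunity` here is a hypothetical stand-in for the broker call that fetches a random opportunity of a given type.

```javascript
// Sketch: expand an array of opportunity types into OrderItems in a random
// order. getRandomOpportunity is a hypothetical stand-in for the broker call.
function buildRandomOrderItems(opportunityTypes, getRandomOpportunity) {
  const orderItems = opportunityTypes.map((type) => ({
    '@type': 'OrderItem',
    orderedItem: getRandomOpportunity(type),
  }));
  // Fisher-Yates shuffle so the request contains the OrderItems in a random order
  for (let i = orderItems.length - 1; i > 0; i -= 1) {
    const j = Math.floor(Math.random() * (i + 1));
    [orderItems[i], orderItems[j]] = [orderItems[j], orderItems[i]];
  }
  // Assign positions after shuffling
  return orderItems.map((item, index) => ({ ...item, position: index }));
}

// Example from the issue: two ScheduledSessions and one Slot.
const orderItems = buildRandomOrderItems(
  ['ScheduledSession', 'ScheduledSession', 'Slot'],
  (type) => ({ '@type': type }),
);
```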

Add cancellation/order-deletion tests

Child of #56

These tests are to ensure Order Deletion flow step is used in tests as well as implemented in #42

Tests when feature is marked as implemented

Given any known opportunity that can be booked
  When running C1, C2 and B, checking the Orders feed for the Order, then running U followed by Order Deletion
  Then a new entry should appear in the Orders feed indicating the Order is deleted
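The implemented check might be sketched against the Orders feed like this; the item shape (`state`, `modified`) follows RPDE conventions, and the helper name is illustrative rather than part of the suite.

```javascript
// Sketch: after U and Order Deletion, the Orders feed should contain an
// entry for the Order with state "deleted". RPDE item shape assumed.
function orderIsDeletedInFeed(feedItems, orderId) {
  const matching = feedItems.filter((item) => item.id === orderId);
  if (matching.length === 0) return false;
  // The entry with the highest modified value is the Order's current state
  const latest = matching.reduce((a, b) => (a.modified > b.modified ? a : b));
  return latest.state === 'deleted';
}

// Example: the Order was updated, then deleted.
const ordersFeed = [
  { id: 'order-1', modified: 1, state: 'updated', data: {} },
  { id: 'order-1', modified: 2, state: 'deleted' },
];
```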

Tests when feature is marked as not-implemented

No-op failure (required feature)

Document how to run a subset of tests

Extracted from #11.

Document how to use Jest's own path filtering based on the new folder structure, once #57 is done.


NB: Other ways to change which tests run (including their documentation) are covered by the following issues:

  • how to configure which features should be tested - #50
  • how to configure activity types - #51
    • + note that activity types need to be in the dataset - #59

Ensure all “bookable” opportunity types are covered

Extend the integration tests to ensure all “bookable” opportunity types are covered within each flow

https://www.openactive.io/open-booking-api/EditorsDraft/#definition-of-a-bookable-opportunity-and-offer-pair

https://developer.openactive.io/publishing-data/data-feeds/types-of-feed#event-relationship-overview

This is probably just a case of adding additional scenarios within each flow that use different data types, but it is worth checking that there are no constraints within the flows themselves that would prevent this.
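One way to make the coverage explicit is to generate the full matrix of flows crossed with opportunity types. The lists below are illustrative assumptions; the authoritative lists come from the specification pages linked above and should not be hard-coded.

```javascript
// Sketch: cross every bookable opportunity type with every flow, so each
// combination gets a scenario. Both lists are illustrative assumptions.
const BOOKABLE_OPPORTUNITY_TYPES = ['ScheduledSession', 'Slot', 'Event', 'CourseInstance'];
const FLOWS = ['book-only', 'book-with-approval'];

function coverageMatrix(types, flows) {
  const combinations = [];
  for (const flow of flows) {
    for (const opportunityType of types) {
      combinations.push({ flow, opportunityType });
    }
  }
  return combinations;
}

const matrix = coverageMatrix(BOOKABLE_OPPORTUNITY_TYPES, FLOWS);
// Each entry can then drive a generated test scenario for that pairing.
```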
