
otpqa's Introduction

Overview


OpenTripPlanner (OTP) is an open source multi-modal trip planner, focusing on travel by scheduled public transportation in combination with bicycling, walking, and mobility services including bike share and ride hailing. Its server component runs on any platform with a Java virtual machine (including Linux, Mac, and Windows). It exposes GraphQL APIs that can be accessed by various clients including open source JavaScript components and native mobile applications. It builds its representation of the transportation network from open data in open standard file formats (primarily GTFS and OpenStreetMap). It applies real-time updates and alerts with immediate visibility to clients, finding itineraries that account for disruptions and service changes.

Note that this branch contains OpenTripPlanner 2, the second major version of OTP, which has been under development since 2018. The latest version of OTP is v2.5.0, released in March 2024.

If you do not want to use this version, please switch to the final 1.x release tag v1.5.0 or the dev-1.x branch.

Performance Test

📊 Dashboard

We run a speed test (included in the code) to measure the performance for every PR merged into OTP.

More information about how to set up and run it.

Repository layout

The main Java server code is in src/main/. OTP also includes a JavaScript client based on the Leaflet mapping library in src/client/. This client is now primarily used for testing, with most major deployments building custom clients from reusable components. The Maven build produces a unified ("shaded") JAR file at target/otp-VERSION.jar containing all necessary code and dependencies to run OpenTripPlanner.

Additional information and instructions are available in the main documentation, including a quick introduction.

Development

OpenTripPlanner is a collaborative project incorporating code, translation, and documentation from contributors around the world. We welcome new contributions. Further development guidelines can be found in the documentation.

Development history

The OpenTripPlanner project was launched by Portland, Oregon's transport agency TriMet (http://trimet.org/) in July of 2009. As of this writing in Q3 2020, it has been in development for over ten years. See the main documentation for an overview of OTP history and a list of cities and regions using OTP around the world.

Getting in touch

The fastest way to get help is to use our Gitter chat room, where most of the core developers are present. Bug reports may be filed via the GitHub issue tracker. The OpenTripPlanner mailing list is used almost exclusively for project announcements. The mailing list and issue tracker are not intended for support questions or discussions; please use the chat for that purpose. Other details of project governance can be found in the main documentation.

OTP Ecosystem

  • awesome-transit Community list of transit APIs, apps, datasets, research, and software.

otpqa's People

Contributors

abyrd, bmander, buma, jordenverwer


otpqa's Issues

Allow shorter run times with a lot of random endpoints

Our existing approach essentially tries the cross product of all modes, all times of day, and all endpoints. It scales this product down somewhat, but it's still an enormous number of combinations.

This approach was motivated by the possibility of stumbling upon some random origin point that might exhibit interesting characteristics only at a particular time of day, or with a particular mode. However, it produces an enormous number of requests. In practice, to achieve realistic run times we find ourselves drastically limiting the number of random endpoints.

For example, with 1000 random endpoints in non-fast mode we see an estimated run time of over 21 days in NYC.
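To see how quickly the cross product explodes, here is a back-of-envelope sketch. The mode and time-of-day counts and the per-request latency are illustrative assumptions, not otpqa's actual values; only the 1000-endpoint figure comes from the example above:

```python
# Back-of-envelope estimate of the cross-product request count.
# The mode/time-of-day counts and per-request latency below are
# illustrative assumptions, not the values otpqa actually uses.
modes = 4            # assumed number of mode combinations tested
times_of_day = 3     # assumed number of departure times tested
endpoints = 1000     # random endpoints, as in the NYC example

# Every endpoint is paired with every other endpoint,
# for every mode and every time of day.
requests = modes * times_of_day * endpoints * (endpoints - 1)
print(requests)  # 11988000

# At an assumed ~150 ms per request, that is already weeks of wall time.
days = requests * 0.15 / 86400
print(round(days, 1))  # 20.8
```

Even with modest parameter counts, the quadratic endpoint term dominates, which is consistent with the 21-day NYC estimate.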

Note that when the total number of requests is held constant, the cross-product approach tests a much smaller number of distinct locations, yet, because the search endpoints are random, it has no higher probability of finding an "interesting" endpoint for a particular set of search parameters.

Therefore, I propose that we grab the endpoints two by two rather than doing anything resembling a cross product. We can step over the random endpoints and the search parameter combinations in parallel, looping over the shorter list until both have been exhausted.
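The parallel-stepping proposal above can be sketched with `itertools.cycle`. The data shapes here are assumptions for illustration, not otpqa's actual structures:

```python
from itertools import cycle

def pair_up(endpoints, param_combos):
    # Step over both lists in parallel, cycling the shorter one until
    # the longer one is exhausted: len(result) == max(len(a), len(b)),
    # rather than len(a) * len(b) as with a cross product.
    if len(endpoints) >= len(param_combos):
        return list(zip(endpoints, cycle(param_combos)))
    return [(e, p) for p, e in zip(param_combos, cycle(endpoints))]

endpoints = ["A", "B", "C", "D", "E"]
params = [{"mode": "WALK"}, {"mode": "TRANSIT,WALK"}]
pairs = pair_up(endpoints, params)
print(len(pairs))  # 5, not 10
```

This keeps the request count linear in the longer list while still exercising every endpoint and every parameter combination at least once.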

It may also be a good idea to take the full list of search parameter combinations from a CSV or JSON file rather than generating them automatically. Generating them by taking the cross product of a bunch of individual parameter values tends to over-represent unusual combinations. There should still be a script to generate "starter" files of request parameters that can then be edited down by the user. The current scripts already allow this approach, but it is not highlighted in the readme.
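A "starter" parameter file could be generated along these lines; the parameter names, values, and output filename are illustrative assumptions, not otpqa's actual ones:

```python
import csv
import itertools

# Sketch of a "starter file" generator: write the cross product of a few
# parameter values to CSV so the user can edit the list down by hand.
# The parameter names and values below are assumptions for illustration.
modes = ["WALK", "TRANSIT,WALK"]
times = ["08:00", "13:00"]
max_walk = ["800", "2000"]

with open("params_starter.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["mode", "time", "maxWalkDistance"])
    writer.writerows(itertools.product(modes, times, max_walk))
```

The user then deletes the over-represented unusual rows by hand before running the profiler.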

It is still a good idea to have a slow and a fast mode. Perhaps the slow mode should use the custom endpoints as origins, over the full range of search parameters, using the full range of random endpoints as destinations.

Some useful changes

I made a few small updates to OTPProfiler:

  • to add total_time/avg_time in seconds in otpprofiler
  • to add the real total_time (totalTime from the OTP response, excluding round-trip time)
  • to enable creating graphs without the database
  • to add optional support for OTP 0.11.x

Still missing: a requirements.txt and an option to ignore endpoints_custom.csv.

If this is useful, I can open a PR or just commit.

Do not convert endpoints from CSV to JSON

The current workflow involves running a script that generates random endpoints as CSV, then another script that ingests the endpoints from CSV and writes them out, unchanged, as JSON. The final profiling script then consumes the JSON.

There does not seem to be any need for these multiple transformations -- the profiler script should just load the CSV directly, or the endpoints should be saved to JSON from the beginning. Given the tabular nature of endpoint lists, CSV seems like a good option.
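Loading the CSV directly is a few lines with the standard library. The column names (name, lat, lon) are assumptions about the endpoint file's layout, not a documented format:

```python
import csv

# Sketch: read endpoints straight from the CSV produced by the
# endpoint-generation script, skipping the CSV -> JSON round trip.
# Column names (name, lat, lon) are assumed, not otpqa's documented format.
def load_endpoints(path):
    with open(path, newline="") as f:
        return [{"name": r["name"], "lat": float(r["lat"]), "lon": float(r["lon"])}
                for r in csv.DictReader(f)]

# Tiny demonstration file.
with open("endpoints.csv", "w", newline="") as f:
    f.write("name,lat,lon\nPioneer Square,45.5191,-122.6793\n")

eps = load_endpoints("endpoints.csv")
print(eps[0]["name"])  # Pioneer Square
```

`csv.DictReader` keeps the tabular structure intact, so no intermediate JSON representation is needed.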
