o19s / quepid


Improve your Elasticsearch, OpenSearch, Solr, Vectara, Algolia and Custom Search search quality.

Home Page: http://www.quepid.com

License: Apache License 2.0

Languages: Ruby 51.62%, JavaScript 27.42%, HTML 19.15%, CSS 0.55%, Shell 0.32%, SCSS 0.88%, Procfile 0.01%, Dockerfile 0.04%
Topics: elasticsearch, solr, search-quality-evaluation, apachesolr, information-retrieval, search, opensearch, vectara, algolia, algolia-search

quepid's Introduction

Quepid


Quepid logo

Quepid makes improving your app's search results a repeatable, reliable engineering process that the whole team can understand. It deals with three issues:

  1. Our collaboration stinks. Making holistic progress on search requires deep, cross-functional collaboration. Shooting emails or tracking search requirements in spreadsheets won't cut it.

  2. Search testing is hard. Search changes are cross-cutting: most changes will cause problems. Testing is difficult: you can't run hundreds of searches after every relevance change.

  3. Iterations are slow. Moving forward seems impossible. To avoid sliding backwards, progress is slow. Many simply give up on search, depriving users of the means to find critical information.

To learn more, please check out the Quepid website and the Quepid wiki.

If you are ready to dive right in, you can use the Hosted Quepid service right now or follow the installation steps to set up your own instance of Quepid.

Below is information related to developing the Quepid open source project, primarily for people interested in extending what Quepid can do!

Development Setup

I. System Dependencies

Using Docker Compose

Provisioning an already-built machine takes approximately 3-4 minutes. Provisioning from scratch takes approximately 20 minutes.

1. Prerequisites

Make sure you have Docker installed and that the Docker app is running. See https://www.docker.com/community-edition#/download for installation instructions.

To install using Homebrew, run:

brew cask install docker
brew cask install docker-toolbox

NOTE: you may get a warning about trusting Oracle on the first try. Open System Preferences > Security & Privacy, click the Allow Oracle button, and then try again to install docker-toolbox

2. Setup your environment

Run the local Ruby-based setup script to set up your Docker images:

bin/setup_docker

If you want to create some cases that have 100s or 1000s of queries, then run:

 bin/docker r bundle exec thor sample_data:large_data

This is useful for stress testing Quepid, especially the front-end application!

Lastly, to run the Jupyter notebooks, you need to run:

bin/setup_jupyterlite

3. Running the app

Now fire up Quepid locally at http://localhost:

bin/docker server

It can take up to a minute for the server to respond, as it compiles all the front-end assets on the first request.

We've created a helper script, bin/docker, that wraps the docker-compose command to run and manage the app; you will need Ruby installed to use it. You can still use docker-compose directly, but for the basics you can use the following commands (a typical session is sketched after the list):

  • Start the app: bin/docker server or bin/docker s
  • Connect to the app container with bash: bin/docker bash or bin/docker ba
  • Connect to the Rails console: bin/docker console or bin/docker c
  • Run any command: bin/docker run [COMMAND] or bin/docker r [COMMAND]
  • Run dev mode as daemon: bin/docker daemon or bin/docker q
  • Destroy the Docker env: bin/docker destroy or bin/docker d
  • Run front end unit tests: bin/docker r rails test:frontend
  • Run back end unit tests: bin/docker r rails test
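
As a quick illustration (a sketch of one possible session, using only the commands above), a typical development loop might look like:

bin/docker server                  # start Quepid at http://localhost
bin/docker bash                    # in another terminal, open a shell inside the app container
bin/docker r rails test            # run the back end unit tests
bin/docker r rails test:frontend   # run the front end unit tests
bin/docker destroy                 # tear down the Docker environment when you are done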

II. Development Log

While running the app under foreman you'll only see a request log; for more detailed logging, run the following:

tail -f log/development.log

III. Run Tests

There are three types of tests that you can run:

Minitest

These run the tests from the Rails side (mainly API controllers and models):

bin/docker r rails test

Run a single test file via:

bin/docker r rails test test/models/user_test.rb

Or even a single test in a test file by passing in the line number!

bin/docker r rails test test/models/user_test.rb:33

If you need to reset your test database setup then run:

bin/docker r bin/rake db:drop RAILS_ENV=test
bin/docker r bin/rake db:create RAILS_ENV=test

To view the logs generated during testing, set config.log_level = :debug in config/environments/test.rb and then tail the log file via:

tail -f log/test.log

JS Lint

To check the JS syntax:

bin/docker r rails test:jshint

Karma

Runs tests for the Angular side. There are two modes for the karma tests:

  • Single run: bin/docker r rails karma:run
  • Continuous/watched run: bin/docker r bin/rake karma:start

Note: The karma tests require the assets to be precompiled, which adds a significant amount of time to the test run. If you are only making changes to the test/spec files, then it is recommended you run the tests in watch mode (bin/docker r bin/rake karma:start). The caveat is that any time you make a change to the app files, you will have to restart the process (or use the single run mode).

Rubocop

To check the Ruby syntax:

bin/docker r bundle exec rubocop

Rubocop can often autocorrect many of the lint issues it runs into via --autocorrect-all:

bin/docker r bundle exec rubocop --autocorrect-all

If there is a new "Cop" (as RuboCop calls its rules) that we don't like, you can add an exclusion for it to the ./.rubocop.yml file.

All Tests

If you want to run all of the tests in one go (before you commit and push for example), just run these two commands:

bin/docker r rails test
bin/docker r rails test:frontend

For some reason we can't run both with one command, though we should be able to!

Performance Testing

If you want to create a LOT of queries for a user for testing, then run:

bin/docker r bin/rake db:seed:large_cases

You will have two users, quepid+100sOfQueries@o19s.com and quepid+1000sOfQueries@o19s.com (see the Seed Data section for the account naming convention), to test with.

Notebook Testing

If you want to test the Jupyterlite notebooks, or work with a "real" case and book, then run:

bin/docker r bundle exec thor sample_data:haystack_party

You will have lots of user data from the Haystack rating party book and case to work with. This data is sourced from the public case https://app.quepid.com/case/6789/try/12?sort=default and book https://app.quepid.com/books/25.

IV. Debugging

Debugging Ruby

Debugging Ruby usually depends on the situation; the simplest way is to print the object to STDOUT:

puts object         # Prints out the .to_s method of the object
puts object.inspect # Inspects the object and prints it out (includes the attributes)
pp object           # Pretty Prints the inspected object (like .inspect but better)

In the Rails application you can use the logger for the output:

Rails.logger.info object.inspect

If that's not enough and you want to run a debugger, the debug gem is included for that. See https://guides.rubyonrails.org/debugging_rails_applications.html#debugging-with-the-debug-gem.

Also, we have the derailed gem available which helps you understand memory issues.

bin/docker r bundle exec derailed bundle:mem

Debugging JS

While running the application, you can debug the JavaScript using your favorite tool, the way you've always done it.

The JavaScript files are concatenated into one file by the Rails asset pipeline.

You can turn that off by toggling the following flag in config/environments/development.rb:

# config.assets.debug = true
config.assets.debug = false

to

config.assets.debug = true
# config.assets.debug = false

Because there are many AngularJS files in this application, and in debug mode Rails tries to load every file separately, debug mode slows the application down and makes waiting for the scripts to load really annoying in development. That is why it is turned off by default.

PS: Don't forget to restart the server when you change the config.

Also, please note that the files secure.js, application.js, and admin.js are used to load all the JavaScript and CSS dependencies via the Rails asset pipeline. If you are debugging Bootstrap, you will want the individual files, so replace //= require sprockets with //= require bootstrap-sprockets.

Webpacker

To use Webpacker, which compiles JavaScript code into packs and loads changes faster, you need to run:

bin/rails webpacker:install

Prior to that I had to install:

brew install mysql

Debugging Splainer and other NPM packages

docker-compose.override.yml.example can be copied to docker-compose.override.yml and used to override environment variables defined in docker-compose.yml, or to work with a local copy of the splainer-search JS library during development. An example is included; just update the path to splainer-search to point at your local checkout. See https://docs.docker.com/compose/extends/ for details.
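
As a minimal sketch of that workflow (the exact path to your splainer-search checkout is up to you):

cp docker-compose.override.yml.example docker-compose.override.yml
# edit docker-compose.override.yml so the splainer-search entry points at your local checkout,
# then restart the app so Compose picks up the override:
bin/docker server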

Convenience Scripts

This application has two ways of running scripts: rake & thor.

Rake is great for simple tasks that depend on the application environment, and for the default tasks that come with Rails.

Thor is a more powerful tool for writing scripts that handle arguments much more gracefully than Rake.

Rake

To see what rake tasks are available run:

bin/docker r bin/rake -T

Note: the use of bin/rake makes sure that the version of rake running is the one locked in the app's Gemfile.lock (to avoid conflicts with other versions that might be installed on your system). This is equivalent to bundle exec rake.

Common rake tasks that you might use:

# db
bin/docker r bin/rake db:create
bin/docker r bin/rake db:drop
bin/docker r bin/rake db:migrate
bin/docker r bin/rake db:rollback
bin/docker r bin/rake db:schema:load
bin/docker r bin/rake db:seed
bin/docker r bin/rake db:setup

# show routes
bin/docker r bin/rails routes

# tests
bin/docker r rails test
bin/docker r rails test:frontend
bin/docker r bin/rake test:jshint

Thor

To see the available tasks:

bin/docker r bundle exec thor list

Additional documentation is in Operating Documentation.

Elasticsearch

You will need to configure Elasticsearch to accept requests from the browser using CORS. To enable CORS, add the following to Elasticsearch's config file. Usually this file is located near the elasticsearch executable at config/elasticsearch.yml.

http.cors:
  enabled: true
  allow-origin: /https?:\/\/localhost(:[0-9]+)?/

See more details on the wiki at https://github.com/o19s/quepid/wiki/Troubleshooting-Elasticsearch-and-Quepid
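
A quick way to sanity-check the CORS configuration (a sketch; it assumes Elasticsearch is listening on localhost:9200) is to send a request with an Origin header and look for the Access-Control-Allow-Origin response header:

curl -s -D - -o /dev/null -H "Origin: http://localhost:3000" http://localhost:9200/ | grep -i access-control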

Dev Errata

I'd like to use a new Node module, or update an existing one

Typically you would simply do:

bin/docker r yarn add foobar

or

bin/docker r yarn upgrade foobar

which will install/upgrade the Node module, and then save that dependency to package.json.

Then check in the updated package.json and yarn.lock files.

Use bin/docker r yarn outdated to see what packages you can update!

I'd like to use a new Ruby Gem, or update an existing one

Typically you would simply do:

bin/docker r bundle add foobar

which will install the new Gem, and then save that dependency to Gemfile.

You can also upgrade a gem that doesn't have a specific version in Gemfile via:

bin/docker r bundle update foobar

You can remove a gem via:

bin/docker r bundle remove foobar --install

Then check in the updated Gemfile and Gemfile.lock files. For good measure, run bin/setup_docker again.

To see whether you have gems that are out of date, run:

bin/docker r bundle outdated --groups

How to test nesting Quepid under a domain

Uncomment the setting - RAILS_RELATIVE_URL_ROOT=/quepid-app in docker-compose.yml and then open http://localhost:3000/quepid-app.

I'd like to run and test out a local PRODUCTION build

These steps should get you up and running locally with a production build (versus the developer build) of Quepid.

  • Make the desired changes to the code
  • From the root dir in the project run the following to build a new docker image:
docker build -t o19s/quepid -f Dockerfile.prod .

This could error on the first run. Try again if that happens.

  • Tag a new version of your image.
  • You can either hard-code your version, use an environment variable for it (like QUEPID_VERSION=10.0.0), or use 'latest' if you prefer:
docker tag o19s/quepid o19s/quepid:$QUEPID_VERSION
  • Bring up the mysql container
docker-compose up -d mysql
  • Run the initialization scripts. This can take a few seconds
docker-compose run --rm app bin/rake db:setup
  • Update your docker-compose.prod.yml file to use your image by updating the image version for the app service, e.g. image: o19s/quepid:10.0.0

  • Start up the app either as a Daemon (-d) or as an active container

docker-compose up [-d]

I'd like to test SSL

There's a directory .ssl that contains the key and cert files used for SSL. This is a self-signed, generated certificate for use in development ONLY!

The key/cert were generated using the following command:

openssl req -new -newkey rsa:2048 -sha1 -days 365 -nodes -x509 -keyout .ssl/localhost.key -out .ssl/localhost.crt

PS: It is not necessary to do that again.

The docker-compose.yml file contains an nginx reverse proxy that uses these certificates. You can access Quepid at https://localhost or http://localhost. (Quepid will still be available over http on port 80.)
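
To sanity-check the proxy from the command line, something like the following works (-k is needed because the certificate is self-signed):

curl -k -I https://localhost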

I'd like to test OpenID Auth

Add dev docs here!

In the developer deploy of Keycloak, the Admin console credentials are admin and password.

Modifying the database

Here is an example of generating a migration:

bin/docker r bundle exec bin/rails g migration FixCuratorVariablesTriesForeignKeyName

Followed by bin/docker r bundle exec rake db:migrate

You should also update the schema annotation data by running bin/docker r bundle exec annotations when you change the schema.

Updating RubyGems

Modify the file Gemfile and then run:

bin/docker r bundle install

You will see an updated Gemfile.lock; go ahead and check it and the Gemfile into Git.

How does the Frontend work?

We use Angular 1 for the front end, and as part of that we use the angular-ui-bootstrap package for all our UI components. This package is tied to Bootstrap version 3. We import the Bootstrap 3 CSS directly via the file bootstrap.css.

For the various Admin pages we are actually using Bootstrap 5! That is included via package.json using NPM. See admin.js for the line //= require bootstrap/dist/js/bootstrap.bundle, which is where we include it.

We currently use Rails Sprockets to compile everything, but do have dreams of moving the JavaScript over to Webpacker.

Fonts

The Aller font face is from FontSquirrel, and the .ttf is converted into .woff2 format.

I'd like to develop Jupyterlite

Run ./bin/setup_jupyterlite to update the archive file ./jupyterlite/notebooks.gz. This also sets up the static files in the ./public/notebooks directory. However, so that we don't check in hundreds of files, that directory is ignored in Git. At asset:precompile time we unpack the ./jupyterlite/notebooks.gz file instead. This works on Heroku and in the production Docker image.
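
A sketch of the two halves of that flow (the precompile invocation shown here is just one way to trigger it locally, not the only one):

bin/setup_jupyterlite                     # rebuilds ./jupyterlite/notebooks.gz and ./public/notebooks
bin/docker r bin/rake assets:precompile   # production-style builds unpack notebooks.gz at this step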

To update the version of Jupyterlite edit Dockerfile.dev and Dockerfile.prod and update the pip install version.

Question: does Jupyterlite work on localhost?

How do Personal Access Tokens work?

See this great blog post: https://keygen.sh/blog/how-to-implement-api-key-authentication-in-rails-without-devise/.

QA

There is a code deployment pipeline to the http://quepid-staging.herokuapp.com site that runs on successful commits to main.

If you have pending migrations you will need to run them via:

heroku run bin/rake db:migrate -a quepid-staging
heroku restart -a quepid-staging

Seed Data

The following accounts are created through the bin/setup_docker process. They all follow this format:

email: quepid+[type]@o19s.com
password: password

where type is one of the following (the resulting logins are spelled out after this list):

  • admin: An admin account
  • realisticActivity: A user with various cases that demonstrate Quepid, including the Haystack Rating Party demo case and book; this user is a member of the 'OSC' team.
  • 100sOfQueries: A user with a Solr case that has 100s of queries (usually disabled)
  • 1000sOfQueries: A user with a Solr case that has 1000s of queries (usually disabled)
  • oscOwner: A user who owns the team 'OSC'
  • oscMember: A user who is a member of the team 'OSC'
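
Putting the format and the type list together, the concrete logins are (all with the password password):

quepid+admin@o19s.com
quepid+realisticActivity@o19s.com
quepid+100sOfQueries@o19s.com
quepid+1000sOfQueries@o19s.com
quepid+oscOwner@o19s.com
quepid+oscMember@o19s.com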

Data Map

Check out the Data Mapping file for more info about the data structure of the app.

Rebuild the ERD via bin/docker r bundle exec rake erd:image

App Structure

Check out the App Structure file for more info on how Quepid is structured.

Operating Documentation

Check out the Operating Documentation file for more information on how Quepid can be operated and configured for your company.

๐Ÿ™ Thank You's

Quepid would not be possible without the contributions from many individuals and organizations.

Specifically we would like to thank Erik Bugge and the folks at Kobler for funding the Only Rated feature released in Quepid 6.4.0.

Quepid wasn't always open source! Check out the credits for a list of contributors to the project.

If you would like to fund development of a new feature for Quepid do get in touch!

🌟 Contributors


quepid's People

Contributors

abhishekchoudhary93, atarora, binarymax, cgamesplay, david-fisher, depahelix2021, dmitrykey, epugh, jacobgraves, jzonthemtn, kennylindahl, liucao0614, michaelcizmar, mkr, moshebla, nathancday, nicholaskwan, okkeklein, pfries, rbednarzcbi, reid-rigo, renovate[bot], slawmac, sumitsarkar, tboeghk, thesench, tiagoshin, tonomonic, worleydl, ychaker


quepid's Issues

db:seed:test rake task really is db:seed:sample, it isn't part of the test process, it's to have sample users!

Is your feature request related to a problem? Please describe.
When doing development you want some sample users to validate various use cases. The method you run is:

bin/rake db:seed:test

This makes you think it has something to do with setting up the testing environment for Quepid, probably populating the quepid_test database! But it doesn't; it populates your quepid_development database. So let's change this to something more generic, because you could deploy a production environment, with a quepid_production database, and get some sample users via bin/rake db:seed:sample RAILS_ENV=production, for example!

Describe the solution you'd like
Rename the method and docs.

Comparing snapshots doesn't work with Russian letters

Describe the bug
"Comparing of snapshots doesn't work, if one of the queries in Relevancy cases contains russian letters "

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Relevancy cases'
  2. Create any snapshot
  3. Add a query with Russian letters, 'обои' for example
  4. Go to 'Compare snapshot'
  5. Choose your snapshot
  6. See empty results on the screen at the right side

Expected behavior
If none of the queries contain Russian letters, everything works fine, and we can look at old and new search results.

Screenshots
screen2
screen1

Desktop (please complete the following information):

  • OS: Windows
  • Browser: Chrome
  • Version 75.0.3770.142

Rating documents whose ID contains a period rejected by Quepid API!

Describe the bug
When your ID of your document contains a ".", for example "mydoc.pdf", then the Quepid API rejects it.

To Reproduce
Steps to reproduce the behavior:

  1. Make sure you have a document whose id has a period in it, and it ISN'T also a URL. ID's that look like http://www.mysite.com/doc.pdf are properly handled.
  2. Try to rate it, and you'll see nothing happens.

Expected behavior
I expect a rating to happen if the id is mydoc.pdf or 1234 or http://www.mysite.com/mydoc.pdf

Screenshots
Error in the browser:
Screenshot at Aug 30 18-38-05

Digging in locally in developer mode:
Screenshot at Aug 30 18-38-37

Compare result-counts

Is your feature request related to a problem? Please describe.
Comparing snapshots helps highlight jumps and dips in precision (or f(X)@10 depending on what you're measuring). However, the only proxy we have for "losing recall" (which tends to happen when you tune-up search precision), is the Results count from the current query.

Describe the solution you'd like
I would like to see the number of results returned from the snapshot alongside the number of results from the current query.

Describe alternatives you've considered
A more sophisticated recall-measurement solution. But do this hits-diff bit first!

curator variables (i.e knobs and dials) that aren't used in query cause weird UI

If you have defined a query variable, like ##titleBoost##, and don't use it in the template, then the Your Knobs screen has an odd UI.

To Reproduce
Steps to reproduce the behavior:

  1. Create a TMDB basic case with Solr.
  2. Define the ##titleBoost## curator variable.
  3. Confirm under Tune Relevance -> Your Knobs it shows up.
  4. Change to the default ES search engine.
  5. Rerun query with the default ES query template.
  6. Check your knobs, you don't get the help message, and you do have a blank Your Knobs.

Expected behavior
Show me all knobs defined. Tell me which are in use or not.

Screenshots
Screenshot at Jan 13 14-09-27

Additional context
Someday we should add a DELETE function on the knobs and dials!

Search Result Templates

From #56 (comment) it was requested to support custom templates in the search result listing.

Describe the solution you'd like
Add templates to cases that default to the current template in Quepid. Allow for users to customize the display based on the available data.

Additional context
Because the app is angular based the templates will need to be angular. We can document the available data and allow users to render/style however they want. This can help support users that would prefer seeing search results they are familiar with if they can match the style of their own SERP.

Showing multiple query scores

Is your feature request related to a problem? Please describe.
This is an improvement which will help measure query scores from different angles.

For all intents and purposes, #50 is in fact a private case of the approach suggested here.

Describe the solution you'd like
Right now a single query score is computed and displayed. A single scorer is selected and used to compute those scores, which is then aggregated to become the Case Score.

Describe alternatives you've considered
We would like to be able to optionally show multiple scores - e.g. CG, NDCG@10, ERR, P@1, P@10 etc. Visually it's definitely possible to show at least 3 scores per query, and this can provide great benefit while tuning relevance - during the initial discovery and research phases, but also for on-going efforts and regression testing.

All the infrastructure for this is already built into Quepid. Our suggestion is to allow picking more than one scorer to be used for a case. Each scorer will compute its own score per query, and all computed scores will be displayed with a clear indication of the scorer name used to produce that score.

A case score can then be decided to be the mean of all query-scores for selected scorers (one or more).

Comparing snapshots should also take all scorers into account (which is in-fact the goal of this change)

Please let me know your thoughts and if all the above makes sense I can go ahead and implement this change.

Nested JSON for field in ES returns as JSON sometimes and genres: [object Object],[object Object] other times!

Describe the bug
Sometimes we render JSON and other times render as "[object Object],[object Object],[object Object],[object Object]".

To Reproduce
Here is a case demonstrating this problem: http://app.quepid.com/case/3166/try/1?sort=default

Steps to reproduce the behavior:

  1. Create a case with default Elasticsearch.
  2. add the query "underdog boxer"
  3. Look at the results.

Expected behavior
The [object object] thing is clearly weird. What can we do about that? JSON nested is maybe okay?

Screenshots
Screenshot at Jan 08 14-43-51

Link to Troubleshooting page for Solr

Is your feature request related to a problem? Please describe.
As of Solr 8.4.1 we need to tweak solr to have connectivity. There is now a wiki page for this.

Describe the solution you'd like
When I can't connect to Solr in Quepid, it encourages me to check the instance to see if it's running, and mentions ad blockers. Let's link to the wiki page.

Migrate the NDCG@10 scorer from app.quepid.com to Docker version and Dev setup

Describe the bug
We have created a NDCG@10 scorer that is the default scorer in app.quepid.com. However, we haven't migrated that scorer to either the Docker version of Quepid OR the dev version. This means if you run Quepid in either of those modes, then you must copy the NDCG@10 scorer from app.quepid.com to your setup.

To Reproduce
Steps to reproduce the behavior:

  1. Look at your custom scorers, you will notice you don't have the NDCG@10 that is in app.quepid.com.

Expected behavior
We expect a set of different scorers to be available regardless of if you are running Quepid as a developer, or on prem via Docker image, or on app.quepid.com. These should include P@, DCG@, CG@, MRR@ to start with.

Additional context
Right now we only have the old V1 scorer available to dev and Docker, and our better, newer scorers only in app.quepid.com. There isn't a well-defined "seed" approach.

Maybe we should think about shipping our scorers in db/scorers, and having an initializer check and update them in the database if the scorer.js is newer than what is in the database?

No COOKIES_URL set, still get cookie popup..

Describe the bug
Quepid consent for cookies pops up regardless of whether you have COOKIES_URL set.

To Reproduce
Steps to reproduce the behavior:
Remove COOKIES_URL, open in Private Window.

Expected behavior
Only show popup if you have a COOKIES_URL property.

Allow a link: tag in the field spec to format a field as an Anchor Link

Is your feature request related to a problem? Please describe.
Sometimes what is returned in Quepid isn't enough to evaluate the document. You need more context, which sometimes is best provided in the original website, or, in our case, stored in a remote PDF document.

Describe the solution you'd like
We want to be able to provide an HTML link to pop open a document in a new browser tab. url:DOCUMENT_URL_FIELD would render an HTML link to the URL in the DOCUMENT_URL_FIELD.

Describe alternatives you've considered
Really want to have custom renderers for search results, where formatting of an a href link could be done, but that seems too much to tackle technically.

Thought about variable interpolation in the url:, so you could do url:http://example.com/docs/{{doc.document_id}}.html, but again that is tougher...

Additional context
Nope.

NDCG@10 doesn't include documents rated from Explain Other

The ndcg@10 appears to only look at the first 10 search results, which I think is the NDCG Local implementation. We notice that for a query that has only a single result, because of how NDCG works, no matter the rating, it scores 100. This makes sense.

However, when we use the Explain Other to find other documents that are relevant and score them, because they don't show up in the search results, the score for the 1 doc result remains 100. We think we should look at the explain other rated documents as well if we don't either have 10 results, or we should use all the explain other results (and make it easy to find them in the UI).

Encourage people to use Quepid Wiki for help

We are starting to use the Github provided Wiki for documentation related to Quepid. Someday we may end up migrating all of the https://quepid.com/docs/ content over to the wiki, since that is locked up today in the Quepid marketing site which isn't a public repo.

Describe the solution you'd like
Start out with just adding a Wiki link to the toolbar in Quepid.

"nDCG@10" scorer always returns 100

Describe the bug
nDCG@10 scorer is returning a score of 100, regardless of how I score the documents the query is returning.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Movies Search' case
  2. Ensure Scorer is set to nDCG@10
  3. Add a query for 'test'
  4. Score all documents as a 1 (Poor)
  5. Note that overall score remains at 100

Expected behavior
Marking all results as "Poor" should cause the overall score for the query to be lowered.

Screenshots
image


Consolidate scorers in schema

Describe the bug
Less of a bug and more of a cleanup to the schema. Scorers and DefaultScorers overlap each other enough that they can be merged to clean up some code. Consolidate the scorers into one scorers table with a default flag and any other fields that may be necessary, and remove the default_scorers table.

Additional context
This will require changing the schema, then updating the Rails and Angular bits to make use of one table instead of two.

Migrating to a more modern codebase

Hello there,

We have been using (and supporting) Quepid for a while, and I'd like to start a discussion about modernizing the Quepid codebase. Quepid is currently running on RoR as a backend (an old version, and pretty much a dead technology now) and Angular 1.something (very, very old and EOL).

Would you consider rebuilding the project, piece by piece, using modern technologies? We can help. We propose using React for the frontend, and Typescript/NodeJS or Go for the backend.

Would love to hear your thoughts.

Prevent unrelated changes from impacting Docker cache

Is your feature request related to a problem? Please describe.
I would like to be able to modify files like the Dockerfile or docker-compose.yml without breaking the caching on step 13 of the dockerfile (copying everything into the container).

Describe the solution you'd like
Either a .dockerignore file should be added that (at a minimum) excludes Dockerfiles and docker-compose.yml files, or the code that belongs inside the container should be moved to a subfolder, and only that code should be copied into it.

Additional context
I am trying to get the project building and running behind a pile of proxies. This involves changing some environment variables and adding some steps to the Dockerfile. Every time I make a change, I have to wait for RUN bundle install to run because all changes invalidate the cache on the COPY . /srv/app step.

Ability to select which hits to compare, instead of raw top-10

Is your feature request related to a problem? Please describe.
We want to track a certain set of specific documents from the result set, not necessarily the top 10.

Describe the solution you'd like
An easy way to tag subset of docs to compare. Ability to toggle between normal view and this subset view.

We want to concentrate on rating a subset of documents relative to each other and to have the query's score calculated as if the selected documents were the top hits, and thus track how these documents relate to each other during tuning.

Describe alternatives you've considered
A workaround for us is to manually add a query filter &fq=idfield:(A C F G I) to the case to only consider the selected documents. But it would be easier if this was a feature with tagging of docs.

Additional context
Our use case is not a traditional SERP page, but we use search to select articles that are a good fit for a certain topic, expressed through a query. The long tail of such a search below a certain threshold will have low quality, and we want to inspect a number of hits near this threshold, since our top-10 hits are normally OK.

Case shared via Team with me has NaN for the Try

Describe the bug
A Case was shared with me via a Team. The Case on the Team page doesn't have a Try number, and when I click through, the URL has a NaN.

To Reproduce
Steps to reproduce the behavior:

  1. Not sure yet.

Expected behavior
Should just be try 0 I think!

Screenshots
Screenshot at Feb 26 14-25-16

leads to

Screenshot at Feb 26 14-25-42

Additional context

Docker-Compose Rails connects to MySQL TOO quickly

Describe the bug
Sometimes the Rails image tries to connect to MySQL image before it has fully spun up...

To Reproduce
Steps to reproduce the behavior:

  1. hard to duplicate, but it happens on ./bin/setup_docker or on docker-compose up with production setup.

Expected behavior
Not have this issue!

Screenshots

Running via Spring preloader in process 20
#<Mysql2::Error::ConnectionError: Can't connect to MySQL server on 'mysql' (111 "Connection refused")>
Couldn't create database for {"adapter"=>"mysql2", "encoding"=>"utf8mb4", "collation"=>"utf8mb4_bin", "reconnect"=>false, "pool"=>5, "username"=>"root", "password"=>"password", "host"=>"mysql", "port"=>3306, "database"=>"quepid"}, {:charset=>"utf8mb4", :collation=>"utf8mb4_bin"}
(If you set the charset manually, make sure you have a matching collation)
-- create_table("annotations", {:force=>:cascade})
rake aborted!
Mysql2::Error::ConnectionError: Can't connect to MySQL server on 'mysql' (111 "Connection refused")
/usr/local/bundle/gems/mysql2-0.5.2/lib/mysql2/client.rb:90:in `connect'
/usr/local/bundle/gems/mysql2-0.5.2/lib/mysql2/client.rb:90:in `initialize'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/mysql2_adapter.rb:18:in `new'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/mysql2_adapter.rb:18:in `mysql2_connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:438:in `new_connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:448:in `checkout_new_connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:422:in `acquire_connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:349:in `block in checkout'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:348:in `checkout'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:263:in `block in connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:262:in `connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_adapters/abstract/connection_pool.rb:571:in `retrieve_connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_handling.rb:113:in `retrieve_connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/connection_handling.rb:87:in `connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/migration.rb:648:in `connection'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/migration.rb:664:in `block in method_missing'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/migration.rb:634:in `block in say_with_time'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/migration.rb:634:in `say_with_time'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/migration.rb:654:in `method_missing'
/srv/app/db/schema.rb:16:in `block in <top (required)>'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/schema.rb:41:in `instance_eval'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/schema.rb:41:in `define'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/schema.rb:61:in `define'
/srv/app/db/schema.rb:14:in `<top (required)>'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:268:in `load'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:268:in `block in load'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:240:in `load_dependency'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:268:in `load'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/tasks/database_tasks.rb:221:in `load_schema_for'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/tasks/database_tasks.rb:238:in `block in load_schema_current'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/tasks/database_tasks.rb:278:in `block in each_current_configuration'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/tasks/database_tasks.rb:277:in `each'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/tasks/database_tasks.rb:277:in `each_current_configuration'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/tasks/database_tasks.rb:237:in `load_schema_current'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/railties/databases.rake:237:in `block (3 levels) in <top (required)>'
/usr/local/bundle/gems/activerecord-4.2.11/lib/active_record/railties/databases.rake:241:in `block (3 levels) in <top (required)>'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:268:in `load'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:268:in `block in load'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:240:in `load_dependency'
/usr/local/bundle/gems/activesupport-4.2.11/lib/active_support/dependencies.rb:268:in `load'
-e:1:in `<main>'
Tasks: TOP => db:schema:load
(See full trace by running task with --trace)

Additional context
This is periodically seen, and was reported by a community member "in the wild".

"i.markUnscored is not a function" error when creating a new case may point to stalled "Updateing Queries"

Describe the bug
I created a new case, and then got the stalled "Updating Queries" message. Looking in the debugger I see

i.markUnscored is not a function

Unfortunately, on app.quepid.com we use compiled JavaScript, so I can't really see which file this error is coming from.

To Reproduce
Steps to reproduce the behavior:

  1. Create a new case
  2. Add a single query "star"
  3. Finish and see error.

I am using Firefox

Screenshots
see the stalled "updating queries"
Screenshot at Jan 16 08-15-30

Additional context
Is there any way to identify this error, even if we don't know how to fix it, and message the user "Please reload the webapp".

I think this problem only occurs coming out of the setup wizard...

Retire the "Communal Scorer" Idea?

It turns out you can create a Scorer that is labeled as "Communal", which means it isn't a system default scorer OR a team scorer. In production we have exactly 0 of these!

I did dig around, and found out that it is still supported, in the sense that you can pick a communal scorer:

Screenshot at Mar 14 10-40-01

However, it seems like the usefulness of this feature isn't really there. If you run your own Quepid, you would use the Default Scorer capability. Or you use the Team capability and share it with your team. I think this is from the Dawn of the Quetaceous Era, when we didn't have Teams.

Describe the solution you'd like
I would like to remove this concept from Quepid.

Describe alternatives you've considered
We could document this feature, but again, not quite seeing the use case.

Additional context
We need @softwaredoug and maybe @ychaker or @omnifroodle to weigh in here.

URL detection treats any occurrence of http as a URL

Describe the bug

The new auto-URL feature introduced here #27 will wrap any string containing the substring "http" anywhere in the string.

To Reproduce

  1. Add a display field for a document that has http in the value but not a url.
  2. Check the display and the entire value is hyperlinked.

Expected behavior
Only strings beginning with http should be hyperlinked.

Can't Rename a Case from the Team page, but you can on the View Cases page

When you are on a specific team page (http://app.quepid.com/teams/88) you see all the Cases. You have some options to manipulate them, for example renaming them; however, that method fails.

To Reproduce
Steps to reproduce the behavior:

  1. Create a case and a team.
  2. share case with team.
  3. On the List cases page, try to rename it; it will work.
  4. Go to the Team page and try and rename the same case, it won't.

Look in the Browser console and you will see:

TypeError: "ctrl.thisCase.rename is not a function"

Expected behavior
You should be able to rename the case.

Additional context
What appears to be happening is that when you load a team, we just hit the API, grab the team, and via a ?load_cases method, we get the case data as well. This means that we don't go through the normal creation of a Case object via the caseSvc, so our thisCase object doesn't have the rename method added to it.

I think we need to rethink this. Should we have the load_case parameter? Or maybe we should add some method to caseSvc that gets them all by team_id? Or, should we return a list of case ids, and then go out and get them.

This was discovered during work on #61

Mysql Container doesn't fully finish in time on ./bin/docker_setup nobuild

Describe the bug
Running ./bin/setup_docker nobuild, the MySQL container may not have fully started before Rails tries to connect. If you re-run the command it will work, because MySQL has had enough time to start, but obviously that isn't good!

To Reproduce
Steps to reproduce the behavior:
run ./bin/setup_docker nobuild

Expected behavior
Quepid fires up!

Variable ("knobs") values should be copied with clone

Describe the bug
The names of any custom variables are copied with "clone", but the values are not.

To Reproduce
Steps to reproduce the behavior:

  1. Create a case
  2. Create a test query with a variable token: (E.g. "q=id:##the_id_is_always_43##")
  3. Click on the "variables" and confirm that there is a field for "the_id_is_always_43"
  4. Set "the_id_is_always_43" to be 43.
  5. Rerun the query and confirm that it's using the variable.
  6. NOW... clone the case
  7. In the clone, examine the "variables" and ensure that "the_id_is_always_43" equals 43.
  8. If not, you have reproduced the bug.

Expected behavior
Variable values would be cloned with the case.


Node Install Failing in Docker? ('Your distribution, identified as "buster", is not currently supported")

Not sure if this has something to do with my environment or somehow the build is not working correctly.

Describe the bug
Looks like the install/build of Node fails when running ./bin/setup_docker

Desktop (please complete the following information):

  • CentOS Linux release 7.6.1810 (Core)
  • Docker version 1.13.1, build b2f74b2/1.13.1

Expected behavior
Should just complete the docker setup.

To Reproduce

  • ./bin/setup_docker
  • Gets to this point and exits setup without completing.
  • apt-get install -y lsb-release > /dev/null 2>&1
  • '[' 'X lsb-release' '!=' X ']'
  • print_status 'Installing packages required for setup: lsb-release...'
  • echo
  • echo '## Installing packages required for setup: lsb-release...'
  • echo
  • exec_cmd 'apt-get install -y lsb-release > /dev/null 2>&1'
  • exec_cmd_nobail 'apt-get install -y lsb-release > /dev/null 2>&1'
  • echo '+ apt-get install -y lsb-release > /dev/null 2>&1'
  • bash -c 'apt-get install -y lsb-release > /dev/null 2>&1'
    ++ lsb_release -d
    ++ grep 'Ubuntu .*development'
    ++ echo 1
  • IS_PRERELEASE=1
  • [[ 1 -eq 0 ]]
    ++ lsb_release -c -s
  • DISTRO=buster
  • check_alt SolydXK solydxk-9 Debian stretch
  • '[' Xbuster == Xsolydxk-9 ']'

[... snip ...]

Confirming "buster" is supported...

Your distribution, identified as "buster", is not currently supported, please contact NodeSource at https://github.com/nodesource/distributions/issues if you think this is incorrect or would like your distribution to be considered for support

  • RC=60
  • [[ 60 != 0 ]]
  • print_status 'Your distribution, identified as "buster", is not currently supported, please contact NodeSource at https://github.com/nodesource/distributions/issues if you think this is incorrect or would like your distribution to be considered for support'
  • echo
  • echo '## Your distribution, identified as "buster", is not currently supported, please contact NodeSource at https://github.com/nodesource/distributions/issues if you think this is incorrect or would like your distribution to be considered for support'
  • echo
  • exit 1
    ERROR: Service 'app' failed to build: The command '/bin/sh -c curl -skL https://deb.nodesource.com/setup_8.x | bash -x -' returned a non-zero code: 1

ES TMDB dataset has invalid poster_path

Describe the bug
The ES TMDB data dump has poster path size of 135, but that was removed by TMDB, so we need to use a width of 185.

To Reproduce
Steps to reproduce the behavior:
1: Create TMDB test case.
2. Add thumb:poster_path
3. confirm that poster image shows up and isn't a broken image.

Explain Other on ES 6 and 7 Broken

Describe the bug
With ES 6 and 7, but not in ES 5, the explain Other function doesn't work.

To Reproduce
Steps to reproduce the behavior:

  1. Make a movie case. Search for Rambo.
  2. Click Explain Missing
  3. Search for title:rambo
  4. You will get results, but no explain in ES 6 and 7

Expected behavior
you should get explains!

Highlighting: Perceived relevance when judging queries

Is your feature request related to a problem? Please describe.
For deeply-indexed, long-form content search may match on terms which are not evident in short easy-to-scan fields (e.g.: title, summary, metadata).

Though these are often lower-quality matches, they may still wind up on a judgement list. And without the match context, they will be judged poorly.

In search applications with deep indexing, the conventional solution for perceived relevance is snippetting/highlighting.

At present, Quepid will execute a query containing highlighting instructions on Solr/ES without complaint. However the highlighting response is not parsed or presented back to the search listing.

Describe the solution you'd like
Parse [{ field:["snippet1", "snippet2"] }] pairs out of any highlighting payload present in the search response (Solr or Elasticsearch). Display highlights in line with requested fields

Describe alternatives you've considered
The current workaround is to open each document's full _source and visually scan the json to try and figure out what's going on. This is not sustainable for tech users (and impractical for business users).

More informative error message when deleting Custom Scorer

Is your feature request related to a problem? Please describe.
I need to delete some old Custom Scorers. When I delete them, the error message ties them to existing Cases, however I don't know which cases use which scorers, so I have to hunt through all of them!

Describe the solution you'd like
Tell me which case, user, or query the scorer is tied to, so I can investigate further.

Refactor doc_generator.rb to not use rsolr

doc_generator.rb is used to create ratings. It is the only part of Quepid that uses RSolr, and since we aren't Solr-only (we work with Elasticsearch and maybe others in the future), we need to separate doc_generator from the search engine. A baby step is to convert it to straight-up API/JSON queries.

Describe the solution you'd like
Keep doc_generator the same, but use api and json, not rsolr.


Elasticsearch 7 returns total hits in a different format than Elasticsearch 5 and 6

Describe the bug
Elasticsearch 7 returns total hits in different JSON format than ES 5 and 6, which leads to odd rendering in the GUI.

To Reproduce
Steps to reproduce the behavior:

  1. Create a sample case in Quepid, and use ES 7: http://quepid-elasticsearch.dev.o19s.com:9207/tmdb/_search
  2. Run a query, see the {"value":130,"relation":"eq"} Results text.
  3. Swap to ES 6 via http://quepid-elasticsearch.dev.o19s.com:9206/tmdb/_search
  4. Run the same queries, but the JSON will go away, showing just 130 Results text

Expected behavior
Just show me the number!

Screenshots

Screenshot at Nov 05 22-59-22

REDIS_URL does not override value in .env

Describe the bug

Quepid 6.1.0

Docker Images should not have .env files
REDIS_URL does not override value in .env

To Reproduce
Run Quepid in production mode with REDIS_URL defined in docker-compose.yml.prod.
Quepid will still use the value in .env.

Expected behavior
REDIS_URL would override .env

Additional context
I had to override the start process and remove the .env first before starting foreman...

Named Query support for elasticsearch

Is your feature request related to a problem? Please describe.
Because Quepid does not support highlighting/snippeting, it is sometimes tricky for raters to understand why a result matched with the same clarity they would have in-app. "Deep matches", or rankings which seem "out-of-whack", rely on the rater understanding a Splainer explanation to decode. Here's what one of mine looks like:

Relevancy Score: 0.00007684284
0.00007684284 Product of following:
 0.03842142 Sum of the following:
   0.01921071 script score function, computed with script:"Script{type=inline, lang='painless', idOrCode='params.w * (Math.pow(_score, params.a) / ( Math.pow(params.k, params.a) + Math.pow(_score, params.a) ) )', options={}, params={a=1.5, w=0.02, k=1}}" and parameters: 
{a=1.5, w=0.02, k=1}

   0.01921071 script score function, computed with script:"Script{type=inline, lang='painless', idOrCode='params.w * (Math.pow(_score, params.a) / ( Math.pow(params.k, params.a) + Math.pow(_score, params.a) ) )', options={}, params={a=1.5, w=0.02, k=1}}" and parameters: 
{a=1.5, w=0.02, k=1}

 0.002 function score, score mode [sum]

(cough cough)

Describe the solution you'd like
Elasticsearch has a feature called "named queries" which allows search engineers to "name" each clause of boolean queries or filters and have the matching "names" returned with each hit.

This allows the search engineer to give meaningful names to "reasons why a document might match" which are then concisely returned without the spaghetti of an explain plan.

(Here's an example of what comes back with each hit payload)

matched_queries: ["entityFilter", "requiredTermsFilter", "catchallMatch", "editorialNudge"]

Sadly there is no support for this in Solr yet to my knowledge. But the utility should be clear for Elasticsearch users.

Describe alternatives you've considered
Support for highlighting/snippeting in Quepid.

Additional context
Article about using named queries: https://qbox.io/blog/elasticsearch-named-queries

Change default generated Elasticsearch query to something that works with > 6

Is your feature request related to a problem? Please describe.
The Query Sandbox section for Elasticsearch generates the following default query:

{
  "query": {
    "match": {
      "_all": "#$query##"
    }
  }
}

Trying to use this against Elasticsearch 6 doesn't work without changing the query, due to the removal of the _all field.

Describe the solution you'd like
It would be nice if the default generated query worked off something like "multi_match" instead, if the desire is to search across multiple fields. Or potentially specify a target ES version to start with a default query that works.

Additional context
This is a point of confusion when introducing the tool to new team members to play around with. When they generate a new relevancy case and it doesn't work immediately, they get confused. The intention is usually to change the query anyway, but appearing unable to query the index at all from the get-go causes some headache.
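
For illustration only, a multi_match version of that default query might look like the following (the index name and field list are assumptions for the TMDB sample data, not what Quepid actually generates):

curl -s -XPOST "http://localhost:9200/tmdb/_search" -H "Content-Type: application/json" -d '
{
  "query": {
    "multi_match": {
      "query": "#$query##",
      "fields": ["title", "overview"]
    }
  }
}'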

Collapse query results well from bottom (in addition to the top)

Is your feature request related to a problem? Please describe.
I most frequently enter judgments on a single query at a time. If multiple wells are open at once it's very easy to lose track of which query is being worked with so I close each query well once I'm done rating.

In order to close a query well, I need to scroll all the way back up to the query header (often 2-3 swipes), stop exactly on the header, and then click the collapse trigger (glyphicon-chevron-up)

Describe the solution you'd like
Currently the collapse trigger lives in the header (up top), but I tend to work my way down the results and then want to collapse the query once I've finished rating all the unrated. A second trigger (glyphicon-chevron-up) at the bottom of results (in the right side of the "Peek at..." div) would remove most of the excess scrolling from this task.

Describe alternatives you've considered (not exclusive)
Alternative 1: If the query header for the query-results in view scrolls up out of the frame, freeze and stack that div under the quepid-header-div allowing the results for the query in focus to scroll underneath it. This way the existing collapse trigger is always in view. (Actually probably a more usable solution... just requires more work)

Alternative 2: Automatically collapse once no "unrated" results remain.

Bonus 3: Keyboard shortcuts (right-arrow open well, left-arrow close well, up/down focus on next result-or-query which is visible above focus (last action).

Additional context
Accelerating judgment entry in Quepid reduces the overall time needed to build judgments. Since the OSC engagement model pushes judgments as a prerequisite, the effort of building judgment lists is likely a major brake on engagement with OSC. Faster human automatons == more work for OSC :-)

Do not attempt to highlight numeric / date fields

Describe the bug
I want to display a date of type PointDateField but it crashes the whole Quepid result page since the Solr query returns an error for the highlight part, due to highlighter not supporting PointDate field. It is not possible to override hl.fl to exclude the date field.

To Reproduce
Steps to reproduce the behavior:

  1. Try to include a pdate field with Standard Highlighter with Solr 7.7
  2. The results disappear and an error is shown
  3. When visiting the raw Solr result you see the document hits, but an error in place of highlighting section.

Expected behavior
Quepid should not attempt to highlight date or numeric fields. Alternatively it should be possible to specify what fields to highlight and not.

Unable to clone a case without full history

Describe the bug
I cannot clone a Case with only one try. If I clone with full try history it succeeds.

To Reproduce
Steps to reproduce the behavior:

  1. Click 'clone' on a case
  2. Select one of the tries and hit 'clone'
  3. An error msg "Unable to clone you case. Please try again." shows up
  4. Try again, but this time select 'Include the entire try history'
  5. This time the clone succeeds

Expected behavior
Should be able to clone one try only (and ideally the dropdown should show the newest try on top)

Additional context
I started Quepid about one week ago from docker-compose.

Notes for a query disappears after collapse+expand query

Describe the bug
Notes on a query are lost when you collapse then expand the query. But after a full page refresh they come back again.

To Reproduce
Steps to reproduce the behavior:

  1. Add a note to a query
  2. Collapse by clicking on the query line
  3. Expand again and toggle Notes box
  4. The note you wrote is gone
  5. Refresh your browser page
  6. Open up the query and the Notes panel again - the note is there

You can duplicate queries when you use the "one;two;three" pattern in add query

I just learned you can add multiple queries at once (https://github.com/o19s/quepid/wiki/Tips-for-working-with-Quepid).

This led me to discover you can also get duplicate queries because the bulk query creator doesn't check to see if a query already exists before inserting it.

So if you add 'star trek;star wars' twice, you get dups!

Expected behavior
Don't allow duplicates.
I'd also like, when I add a duplicate, to see the message "Query already exists" in the GUI instead of "Query added".

Screenshots
Screenshot at Dec 20 15-50-42

Additional context
There may be a use case for bulk loading without checking, but it seems like a very dangerous "advanced" feature.

First load of Quepid site, can't export a Case

Describe the bug
The very first page in Quepid with a case, if you go to export the case, then we see errors at https://github.com/o19s/quepid/blob/master/app/assets/javascripts/components/export_entire_case/export_entire_case_controller.js#L39

I believe that this method isn't being called: https://github.com/o19s/quepid/blob/master/app/assets/javascripts/components/export_entire_case/export_entire_case_controller.js#L26

To Reproduce
Steps to reproduce the behavior:

  1. Go to http://www.quepid.com
  2. Log in and end up on the Case page.
  3. Click Export Case and pick General.
  4. You will see that nothing happens, and you get an undefined error in the Console.

However, if you go to the Case Drop down and pick your default case, then you can click the Export Case option.
Expected behavior
Click Export Case, and get an export.

Docs rated using Explain Other get lost.

Is your feature request related to a problem? Please describe.
Coming out of #77 and #78, the explainOther is a powerful tool but you may not know what other docs you have rated unless you know the query to use. There is no "Show me all rated docs" feature.

Describe the solution you'd like
In the Explain Missing Documents, it would be nice to click a toggle and see all the missing documents that are rated. That way you know if a doc was rated or not, or if there are ratings that need to be cleared out.
