typesense / typesense

Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences

Home Page: https://typesense.org

License: GNU General Public License v3.0

CMake 0.56% Shell 0.21% C++ 89.17% Dockerfile 0.26% C 9.49% Starlark 0.32%
search-engine search typo-tolerance site-search instantsearch fuzzy-search full-text-search enterprise-search synonyms faceting

typesense's Introduction

Typesense

Typesense is a fast, typo-tolerant search engine for building delightful search experiences.

An Open Source Algolia Alternative &
An Easier-to-Use ElasticSearch Alternative


Website | Documentation | Roadmap | Slack Community | Community Threads | Twitter


Typesense Demo

✨ Here are a couple of live demos that show Typesense in action on large datasets:

🗣️ 🎥 If you prefer watching videos:


Features

  • Typo Tolerance: Handles typographical errors elegantly, out-of-the-box.
  • Simple and Delightful: Simple to set up, integrate with, operate, and scale.
  • ⚡ Blazing Fast: Built in C++. Meticulously architected from the ground up for low-latency (<50ms) instant searches.
  • Tunable Ranking: Easy to tailor your search results to perfection.
  • Sorting: Dynamically sort results based on a particular field at query time (helpful for features like "Sort by Price (asc)").
  • Faceting & Filtering: Drill down and refine results.
  • Grouping & Distinct: Group similar results together to show more variety.
  • Federated Search: Search across multiple collections (indices) in a single HTTP request.
  • Geo Search: Search and sort results around a latitude/longitude or within a bounding box.
  • Vector Search: Index embeddings from your machine learning models in Typesense and do a nearest-neighbor search. Can be used to build similarity search, semantic search, visual search, recommendations, etc.
  • Semantic / Hybrid Search: Automatically generate embeddings from within Typesense using built-in models like S-BERT, E-5, etc., or use OpenAI, PaLM API, etc., for both queries and indexed data. This allows you to send JSON data into Typesense and build an out-of-the-box semantic search + keyword search experience.
  • Conversational Search (Built-in RAG): Send questions to Typesense and have the response be a fully-formed sentence, based on the data you've indexed in Typesense. Think ChatGPT, but over your own data.
  • Image Search: Search through images using text descriptions of their contents, or perform similarity searches, using the CLIP model.
  • Voice Search: Capture and send queries as voice recordings; Typesense will transcribe them (via the Whisper model) and return search results.
  • Scoped API Keys: Generate API keys that only allow access to certain records, for multi-tenant applications.
  • JOINs: Connect one or more collections via common reference fields and join them during query time. This allows you to model SQL-like relationships elegantly.
  • Synonyms: Define words as equivalents of each other, so searching for a word will also return results for the synonyms defined.
  • Curation & Merchandizing: Boost particular records to a fixed position in the search results, to feature them.
  • Raft-based Clustering: Setup a distributed cluster that is highly available.
  • Seamless Version Upgrades: As new versions of Typesense come out, upgrading is as simple as swapping out the binary and restarting Typesense.
  • No Runtime Dependencies: Typesense is a single binary that you can run locally or in production with a single command.
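
As a small illustration of how several of these query-time features combine, here's a hypothetical search request using the Python client (the products collection and its fields are made up for this example; client setup is shown in the Quick Start below):

search_parameters = {
  'q'          : 'phone',
  'query_by'   : 'product_name',
  'filter_by'  : 'price:<500',   # Filtering: refine results at query time
  'facet_by'   : 'brand',        # Faceting: return per-brand counts alongside hits
  'sort_by'    : 'price:asc',    # Sorting: sort order chosen dynamically per request
  'group_by'   : 'brand',        # Grouping: surface more variety across brands
  'group_limit': 1               # at most one hit per brand group
}

client.collections['products'].documents.search(search_parameters)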

Don't see a feature on this list? Search our issue tracker to see if someone has already requested it and add a comment explaining your use case, or open a new issue if not. We prioritize our roadmap based on user feedback, so we'd love to hear from you.

Roadmap

Here's Typesense's public roadmap: https://github.com/orgs/typesense/projects/1.

The first column also explains how we prioritize features, how you can influence prioritization and our release cadence.

Benchmarks

  • A dataset containing 2.2 Million recipes (recipe names and ingredients):
    • Took up about 900MB of RAM when indexed in Typesense
    • Took 3.6mins to index all 2.2M records
    • On a server with 4vCPUs, Typesense was able to handle 104 concurrent search queries per second, with an average search processing time of 11ms.
  • A dataset containing 28 Million books (book titles, authors and categories):
    • Took up about 14GB of RAM when indexed in Typesense
    • Took 78mins to index all 28M records
    • On a server with 4vCPUs, Typesense was able to handle 46 concurrent search queries per second, with an average search processing time of 28ms.
  • With a dataset containing 3 Million products (Amazon product data), Typesense was able to handle 250 concurrent search queries per second on an 8-vCPU, 3-node Highly Available Typesense cluster.

We'd love to benchmark with larger datasets, if we can find large ones in the public domain. If you have any suggestions for structured datasets that are open, please let us know by opening an issue. We'd also be delighted if you're able to share benchmarks from your own large datasets. Please send us a PR!

Who's using this?

Typesense is used by a range of users across different domains and verticals.

On Typesense Cloud we serve more than 3 BILLION searches per month. Typesense's Docker images have been downloaded over 12M times.

We've recently started documenting who's using it in our Showcase. If you'd like to be included in the list, please feel free to edit SHOWCASE.md and send us a PR.

You'll also see a list of user logos on the Typesense Cloud home page.

Install

Option 1: You can download the binary packages that we publish for Linux (x86_64 & arm64) and Mac (x86_64).

Option 2: You can also run Typesense from our official Docker image.

Option 3: Spin up a managed cluster with Typesense Cloud.

Quick Start

Here's a quick example showcasing how you can create a collection, index a document and search it on Typesense.

Let's begin by starting the Typesense server via Docker:

docker run -p 8108:8108 -v/tmp/data:/data typesense/typesense:26.0 --data-dir /data --api-key=Hu52dwsas2AdxdE
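
Once the server is up (it can take a few seconds to start), you can verify that it's healthy via the health endpoint, which should return {"ok":true}:

curl http://localhost:8108/health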

We have API Clients in a couple of languages, but let's use the Python client for this example.

Install the Python client for Typesense:

pip install typesense

We can now initialize the client and create a companies collection:

import typesense

client = typesense.Client({
  'api_key': 'Hu52dwsas2AdxdE',
  'nodes': [{
    'host': 'localhost',
    'port': '8108',
    'protocol': 'http'
  }],
  'connection_timeout_seconds': 2
})

create_response = client.collections.create({
  "name": "companies",
  "fields": [
    {"name": "company_name", "type": "string" },
    {"name": "num_employees", "type": "int32" },
    {"name": "country", "type": "string", "facet": True }
  ],
  "default_sorting_field": "num_employees"
})
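
As an optional sanity check, you can retrieve the collection you just created, which returns its schema along with metadata like the current document count:

# Fetch the collection's schema and metadata back from the server
print(client.collections['companies'].retrieve())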

Now, let's add a document to the collection we just created:

document = {
 "id": "124",
 "company_name": "Stark Industries",
 "num_employees": 5215,
 "country": "USA"
}

client.collections['companies'].documents.create(document)
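
If you have many documents to index, the Python client also supports bulk imports, which are much faster than creating documents one at a time. A minimal sketch, using a couple of made-up documents:

documents = [
  {"id": "125", "company_name": "Wayne Enterprises", "num_employees": 8200, "country": "USA"},
  {"id": "126", "company_name": "Pied Piper", "num_employees": 12, "country": "USA"}
]

# Bulk import; returns one success/failure result per document
client.collections['companies'].documents.import_(documents, {'action': 'create'})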

Finally, let's search for the document we just indexed:

search_parameters = {
  'q'         : 'stork',
  'query_by'  : 'company_name',
  'filter_by' : 'num_employees:>100',
  'sort_by'   : 'num_employees:desc'
}

client.collections['companies'].documents.search(search_parameters)
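
The response is a plain Python dictionary. Here's a minimal sketch of reading the matches out of it (found and hits are part of the documented response format):

results = client.collections['companies'].documents.search(search_parameters)

print(results['found'])  # total number of matching documents
for hit in results['hits']:
    # each hit contains the original document, plus highlight info
    print(hit['document']['company_name'])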

Did you notice the typo in the query text? No big deal. Typesense handles typographic errors out-of-the-box!
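
Typo tolerance can also be tuned per request. For example, the num_typos search parameter caps how many typos are tolerated, shown here as a variation of the search above:

search_parameters = {
  'q'         : 'stork',
  'query_by'  : 'company_name',
  'num_typos' : 1  # allow at most one typo per query token
}

client.collections['companies'].documents.search(search_parameters)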

Step-by-step Walk-through

A step-by-step walk-through is available on our website here.

This will guide you through the process of starting up a Typesense server, indexing data in it and querying the data set.

API Documentation

Here's our official API documentation, available on our website: https://typesense.org/api.

If you notice any issues with the documentation or walk-through, please let us know or send us a PR here: https://github.com/typesense/typesense-website.

API Clients

While you can definitely use curl to interact with the Typesense Server directly, we offer official API clients to simplify using Typesense from your language of choice. The API clients come built-in with a smart retry strategy to ensure that API calls made via them are resilient, especially in an HA setup.

If we don't offer an API client in your language, you can still use any popular HTTP client library to access Typesense's APIs directly.
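
For example, here's a minimal sketch of calling the search endpoint directly with Python's requests library, reusing the server and collection from the Quick Start above:

import requests

response = requests.get(
  'http://localhost:8108/collections/companies/documents/search',
  headers={'X-TYPESENSE-API-KEY': 'Hu52dwsas2AdxdE'},
  params={'q': 'stark', 'query_by': 'company_name'}
)

print(response.json())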

Here are some community-contributed clients and integrations:

We welcome community contributions to add more official client libraries and integrations. Please reach out to us at [email protected] or open an issue on GitHub to collaborate with us on the architecture. 🙏

Framework Integrations

We also have the following framework integrations:

Postman Collection

We have a community-maintained Postman Collection here: https://github.com/typesense/postman.

Postman is an app that lets you perform HTTP requests by pointing and clicking, instead of having to type them out in the terminal. The Postman Collection above gives you template requests that you can import into Postman to quickly make API calls to Typesense.

Search UI Components

You can use our InstantSearch.js adapter to quickly build powerful search experiences, complete with filtering, sorting, pagination and more.

Here's how: https://typesense.org/docs/guide/search-ui-components.html

FAQ

How does this differ from Elasticsearch?

Elasticsearch is a large piece of software that takes a non-trivial amount of effort to set up, administer, scale, and fine-tune. It offers you a few thousand configuration parameters to get to your ideal configuration, so it's better suited for large teams who have the bandwidth to get it production-ready, regularly monitor it and scale it, especially when they need to store billions of documents and petabytes of data (e.g., logs).

Typesense is built specifically for decreasing the "time to market" for a delightful search experience. It's a lightweight yet powerful and scalable alternative that focuses on developer happiness and experience, with a clean, well-documented API, clear semantics, and smart defaults, so it just works well out-of-the-box without you having to turn many knobs.

Elasticsearch also runs on the JVM, which by itself can take quite an effort to tune for optimal performance. Typesense, on the other hand, is a single lightweight, self-contained native binary, so it's simple to set up and operate.

See a side-by-side feature comparison here.

How does this differ from Algolia?

Algolia is a proprietary, hosted, search-as-a-service product that works well when cost is not an issue. From our experience, fast-growing sites and apps quickly run into search and indexing limits, accompanied by expensive plan upgrades as they scale.

Typesense, on the other hand, is an open-source product that you can run on your own infrastructure, or you can use our managed SaaS offering, Typesense Cloud. The open source version is free to use (besides, of course, your own infra costs). With Typesense Cloud we don't charge by records or search operations. Instead, you get a dedicated cluster and you can throw as much data and traffic at it as it can handle. You only pay a fixed hourly cost and bandwidth charges for it, depending on the configuration you choose, similar to most modern cloud platforms.

From a product perspective, Typesense is closer in spirit to Algolia than Elasticsearch. However, we've addressed some important limitations with Algolia:

Algolia requires separate indices for each sort order, which counts towards your plan limits. Most index settings, like fields to search, fields to facet, fields to group by, ranking settings, etc., are defined upfront when the index is created, rather than being settable on the fly at query time.

With Typesense, these settings can be configured at search time via query parameters, which makes it very flexible and unlocks new use cases. Typesense is also able to give you sorted results with a single index, instead of requiring one index per sort order. This helps reduce memory consumption.

Algolia offers the following features that Typesense does not have currently: personalization & server-based search analytics. For analytics, you can still instrument your search on the client-side and send search metrics to your web analytics tool of choice.

We intend to bridge this gap in Typesense, but in the meantime, please let us know if any of these are a show stopper for your use case by creating a feature request in our issue tracker.

See a side-by-side feature comparison here.

Speed is great, but what about the memory footprint?

A fresh Typesense server will consume about 30 MB of memory. As you start indexing documents, the memory use will increase correspondingly. How much it increases depends on the number and type of fields you index.

We've strived to keep the in-memory data structures lean. To give you a rough idea: when 1 million Hacker News titles are indexed along with their points, Typesense consumes 165 MB of memory. The same data takes up 88 MB on disk in JSON format. If you have any numbers from your own datasets that we can add to this section, please send us a PR!

Why the GPL license?

From our experience, companies are generally concerned when libraries they use are GPL-licensed, since library code is directly integrated into their own code, which creates a derivative work and triggers GPL compliance. However, Typesense Server is server software, and we expect users to typically run it as a separate daemon rather than integrate it with their own code. The GPL covers and allows for this use case generously (e.g., Linux is GPL-licensed). It is the AGPL, not the GPL, that makes server software accessed over a network count as a derivative work, and for that reason we've opted not to use the AGPL for Typesense.

If someone makes modifications to Typesense Server, the GPL actually allows them to keep those modifications to themselves as long as they don't distribute the modified code. So a company can, for example, modify Typesense Server and run the modified code internally without having to open source their modifications, because source-sharing obligations only apply to those who receive the modified software.

If someone makes modifications to Typesense Server and distributes them, that's where the GPL kicks in. Given that we've published our work to the community, we'd like others' modifications to also be made open to the community in the spirit of open source. We use the GPL for this purpose. Other licenses would allow our open source work to be modified, made closed source, and distributed, which we want to avoid with Typesense for the project's long-term sustainability.

Here's more background on why GPL, as described by Discourse: https://meta.discourse.org/t/why-gnu-license/2531. Many of the points mentioned there resonate with us.

All of the above applies only to Typesense Server. Our client libraries are indeed meant to be integrated into our users' code, and so they use the Apache license.

So in summary, AGPL is what is usually problematic for server software and we’ve opted not to use it. We believe GPL for Typesense Server captures the essence of what we want for this open source project. GPL has a long history of successfully being used by popular open source projects. Our libraries are still Apache licensed.

If you have specifics that prevent you from using Typesense due to a licensing issue, we're happy to explore this topic further with you. Please reach out to us.

Support

👋 🌐 If you have general questions about Typesense, want to say hello or just follow along, we'd like to invite you to join our public Slack Community.

If you run into any problems or issues, please create a GitHub issue and we'll try our best to help.

We strive to provide good support through our issue trackers on GitHub. However, if you'd like to receive private & prioritized support with:

  • Guaranteed SLAs
  • Phone / video calls to discuss your specific use case and get recommendations on best practices
  • Private discussions over Slack
  • Guidance around scaling best practices
  • Prioritized feature requests

then take a look at our Paid Support options described here.

Contributing

We are a lean team on a mission to democratize search and we'll take all the help we can get! If you'd like to get involved, here's information on where we could use your help: Contributing.md

Getting Latest Updates

If you'd like to get updates when we release new versions, click on the "Watch" button on the top and select "Releases only". GitHub will then send you notifications along with a changelog with each new release.

We also post updates to our Twitter account about releases and additional topics related to Typesense. Follow us here: @typesense.

👋 🌐 We'll also post updates on our Slack Community.

Build from source

We use Bazel to build Typesense.

Typesense requires the following dependencies:

  • C++11 compatible compiler (GCC >= 4.9.0, Apple Clang >= 8.0, Clang >= 3.9.0)
  • Snappy
  • zlib
  • OpenSSL (>=1.0.2)
  • curl
  • ICU

Please refer to the CI build steps for the latest set of dependencies.

Once you've installed them, run the following from the root of the repo:

bazel build //:typesense-server

The first build will take some time since other third-party libraries are pulled and built as part of the build process.


© 2016-present Typesense Inc.

typesense's People

Contributors

0x2adr1, 0xflotus, alexambarch, alexjball, alphatownsman, artt, baltpeter, bradenmacdonald, brianweet, coderiekelt, davidpaulsson, ekon97, furnnl, happy-san, harisarang, jasonbosco, joeirimpan, kishorenc, krunal1313, maximevalette, mihirpaldhikar, orasik, ozanarmagan, rayjasson98, redsnail, skipjack, sunny, the-alchemist, tpayne84, vegarsti


typesense's Issues

Demos!

@jasonbosco Let's brainstorm on a few useful demos that would showcase the power and simplicity of Typesense. Here are a few themes we should explore:

  • Simple text search (like Google news search) with sort by both timestamp and relevancy
  • Faceted search (like Shopping engines)
  • Auto complete (typeahead use case)

Documentation for Running In-Process?

Running In-Process

Hello, I have a dataset that is roughly 1.5 million documents. Today, this dataset is stored in a SQL database and is accessed through a single web server. Would Typesense be a good candidate for in-process search on this dataset? If so, is there documentation on how to do this?

AND query mode

Description

I notice that querying seems to occur using an OR operator across query terms. I'd love the option to make it AND.

Steps to reproduce

Using the website books demo, search 'harry august'

Expected Behavior

The only result should be 'The First Fifteen Lives of Harry August'.

Actual Behavior

'The First Fifteen Lives of Harry August' is the first result returned, but it also returns books like 'Harry Potter'. Presumably this is because it matches on any query token (e.g. OR) instead of all query tokens (e.g. AND).

Return 404 when a non-existent field is searched for

Description

Currently, when you search for a field that doesn't exist in the schema, the API returns an HTTP 400: Bad Request with the message `Could not find a field named X in the schema.`

It also returns HTTP 400 when, for example, bad JSON is passed in the request.

I think a non-existent field should return a 404, with the same message.

Steps to reproduce

$ curl -X POST "http://localhost:8108/collections" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd" -H  "Content-Type: application/json" -d "{  \"name\": \"companies\",  \"fields\": [    {      \"name\": \"company_name\",      \"type\": \"string\",      \"facet\": false    },    {      \"name\": \"num_employees\",      \"type\": \"int32\",      \"facet\": false    },    {      \"name\": \"country\",      \"type\": \"string\",      \"facet\": true    }  ],  \"token_ranking_field\": \"num_employees\"}"

$ curl -X POST "http://localhost:8108/collections/companies/documents" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd" -H  "Content-Type: application/json" -d "{  \"id\": \"124\",  \"company_name\": \"Stark Industries\",  \"num_employees\": 5215,  \"country\": \"USA\",  \"notinschema\": \"abc\"}"

$ curl -X GET "http://localhost:8108/collections/companies/documents/search?q=blah&query_by=hello" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd"

Expected Behavior

Return HTTP 404

{
  "message": "Could not find a field named `hello` in the schema."
}

Actual Behavior

Returns HTTP 400

{
  "message": "Could not find a field named `hello` in the schema."
}

Metadata

Typesense Version: 0.8-api-changes

Trouble building Mac binary from source

Description

Putting together a PR for #20

I'm trying to build from source on Mac High Sierra (10.13.3). Running into trouble when running:

./build.sh --create-binary --clean --depclean

I get:

++ dirname ./build.sh
++ read a
++ cd .
++ pwd
++ break
+ PROJECT_DIR=/Users/cameron/code/typesense
++ uname -s
+ SYSTEM_NAME=Darwin
+ '[' -z '' ']'
+ TYPESENSE_VERSION=nightly
+ [[ --create-binary --clean --depclean == *\-\-\c\l\e\a\n* ]]
+ echo Cleaning...
Cleaning...
+ rm -rf /Users/cameron/code/typesense/build
+ mkdir /Users/cameron/code/typesense/build
+ [[ --create-binary --clean --depclean == *\-\-\d\e\p\c\l\e\a\n* ]]
+ echo 'Cleaning dependencies...'
Cleaning dependencies...
+ rm -rf /Users/cameron/code/typesense/external-Darwin
+ mkdir /Users/cameron/code/typesense/external-Darwin
+ cmake -DTYPESENSE_VERSION=nightly -DCMAKE_BUILD_TYPE=Release -H/Users/cameron/code/typesense -B/Users/cameron/code/typesense/build
-- The C compiler identification is AppleClang 9.1.0.9020039
-- The CXX compiler identification is AppleClang 9.1.0.9020039
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenSSL: /usr/local/opt/openssl/lib/libcrypto.a (found suitable version "1.0.2o", minimum required is "1.0.2") 
-- Found Snappy: /usr/local/opt/snappy/lib/libsnappy.a  
-- Found ZLIB: /usr/local/opt/zlib/lib/libz.a (found version "1.2.11") 
-- Found CURL: /usr/local/opt/curl/lib/libcurl.a (found version "7.59.0") 
-- Found ICU header files in /usr/local/opt/icu4c/include
-- Found ICU libraries: /usr/local/opt/icu4c/lib/libicuuc.a
-- Downloading libfor...
-- Extracting libfor...
Building libfor locally...
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -pedantic -Wall -Wextra -O3 -c for.c
ar rvs libfor.a for.o
ar: creating archive libfor.a
a - for.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -pedantic -Wall -Wextra -O3 -o benchmark benchmark.c libfor.a
benchmark.c:52:3: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
  VERIFY(s1 == s2);
  ^         ~~
benchmark.c:21:30: note: expanded from macro 'VERIFY'
#define VERIFY(c)     while (!c) {                                          \
                             ^~
benchmark.c:52:3: note: add parentheses after the '!' to evaluate the comparison first
benchmark.c:21:30: note: expanded from macro 'VERIFY'
#define VERIFY(c)     while (!c) {                                          \
                             ^
benchmark.c:52:3: note: add parentheses around left hand side expression to silence this warning
benchmark.c:21:30: note: expanded from macro 'VERIFY'
#define VERIFY(c)     while (!c) {                                          \
                             ^
benchmark.c:53:3: warning: logical not is only applied to the left hand side of this comparison [-Wlogical-not-parentheses]
  VERIFY(s2 == s3);
  ^         ~~
benchmark.c:21:30: note: expanded from macro 'VERIFY'
#define VERIFY(c)     while (!c) {                                          \
                             ^~
benchmark.c:53:3: note: add parentheses after the '!' to evaluate the comparison first
benchmark.c:21:30: note: expanded from macro 'VERIFY'
#define VERIFY(c)     while (!c) {                                          \
                             ^
benchmark.c:53:3: note: add parentheses around left hand side expression to silence this warning
benchmark.c:21:30: note: expanded from macro 'VERIFY'
#define VERIFY(c)     while (!c) {                                          \
                             ^
2 warnings generated.
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -pedantic -Wall -Wextra -O3 -o test test.c  libfor.a
echo "please run unit tests by running ./test"
please run unit tests by running ./test
-- Downloading h2o-2.2.4...
-- Extracting h2o-2.2.4...
Configuring h2o-2.2.4...
-- The C compiler identification is AppleClang 9.1.0.9020039
-- The CXX compiler identification is AppleClang 9.1.0.9020039
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PkgConfig: /usr/local/bin/pkg-config (found version "0.29.2") 
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE  
-- Found OpenSSL: /usr/local/opt/openssl/lib/libcrypto.dylib (found version "1.0.2o") 
-- Found ZLIB: /usr/lib/libz.dylib (found version "1.2.11") 
-- Performing Test ARCH_SUPPORTS_64BIT_ATOMICS
-- Performing Test ARCH_SUPPORTS_64BIT_ATOMICS - Success
-- Checking for module 'libuv>=1.0.0'
--   No package 'libuv' found
-- Could NOT find LIBUV (missing: LIBUV_LIBRARIES LIBUV_INCLUDE_DIR) 
-- Checking for module 'libwslay'
--   No package 'libwslay' found
-- Could NOT find WSLAY (missing: WSLAY_LIBRARIES WSLAY_INCLUDE_DIR) 
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/cameron/code/typesense/external-Darwin/h2o-2.2.4/build
Building h2o-2.2.4 locally...
Scanning dependencies of target h2o
[  1%] Building C object CMakeFiles/h2o.dir/deps/cloexec/cloexec.c.o
[... ~120 more per-file "Building C/CXX object" progress lines elided ...]
[ 58%] Building C object CMakeFiles/h2o.dir/deps/picotls/lib/openssl.c.o
[ 60%] Linking CXX executable h2o
[ 60%] Built target h2o
Scanning dependencies of target libh2o-evloop
[ 60%] Building C object CMakeFiles/libh2o-evloop.dir/deps/cloexec/cloexec.c.o
[... ~77 more per-file "Building C object" progress lines elided ...]
[ 98%] Building C object CMakeFiles/libh2o-evloop.dir/lib/http2/http2_debug_state.c.o
[100%] Linking C static library libh2o-evloop.a
[100%] Built target libh2o-evloop
-- Downloading rocksdb-5.9.2...
-- Extracting rocksdb-5.9.2...
Building rocksdb-5.9.2 locally...
rm -f db_bench table_reader_bench cache_bench memtablerep_bench column_aware_encoding_exp persistent_cache_bench sst_dump db_sanity_test db_stress write_stress ldb db_repl_stress rocksdb_dump rocksdb_undump blob_dump  db_basic_test db_encryption_test db_test2 external_sst_file_basic_test auto_roll_logger_test bloom_test dynamic_bloom_test c_test checkpoint_test crc32c_test coding_test inlineskiplist_test env_basic_test env_test hash_test thread_local_test rate_limiter_test perf_context_test iostats_context_test db_wal_test db_block_cache_test db_test db_blob_index_test db_bloom_filter_test db_iter_test db_log_iter_test db_compaction_filter_test db_compaction_test db_dynamic_level_test db_flush_test db_inplace_update_test db_iterator_test db_memtable_test db_merge_operator_test db_options_test db_range_del_test db_sst_test db_tailing_iter_test db_universal_compaction_test db_io_failure_test db_properties_test db_table_properties_test db_statistics_test db_write_test autovector_test blob_db_test cleanable_test column_family_test table_properties_collector_test arena_test block_test cache_test corruption_test slice_transform_test dbformat_test fault_injection_test filelock_test filename_test file_reader_writer_test block_based_filter_block_test full_filter_block_test partitioned_filter_block_test hash_table_test histogram_test log_test manual_compaction_test mock_env_test memtable_list_test merge_helper_test memory_test merge_test merger_test util_merge_operators_test options_file_test redis_test reduce_levels_test plain_table_db_test comparator_db_test external_sst_file_test prefix_test skiplist_test write_buffer_manager_test stringappend_test cassandra_format_test cassandra_functional_test cassandra_row_merge_test cassandra_serialize_test ttl_test date_tiered_test backupable_db_test document_db_test json_document_test sim_cache_test spatial_db_test version_edit_test version_set_test compaction_picker_test version_builder_test file_indexer_test write_batch_test write_batch_with_index_test write_controller_test deletefile_test table_test geodb_test delete_scheduler_test options_test options_settable_test options_util_test event_logger_test timer_queue_test cuckoo_table_builder_test cuckoo_table_reader_test cuckoo_table_db_test flush_job_test wal_manager_test listener_test compaction_iterator_test compaction_job_test thread_list_test sst_dump_test column_aware_encoding_test compact_files_test optimistic_transaction_test write_callback_test heap_test compact_on_deletion_collector_test compaction_job_stats_test option_change_migration_test transaction_test ldb_cmd_test persistent_cache_test statistics_test lua_test range_del_aggregator_test lru_cache_test object_registry_test repair_test env_timed_test write_prepared_transaction_test  librocksdb.a librocksdb.dylib librocksdb.5.dylib librocksdb.5.9.dylib librocksdb.5.9.2.dylib
rm -rf  make_config.mk shared-objects t LOG /var/folders/q6/4t8jjl55107gpwfx45x37wcw0000gn/T//rocksdb.HSqu unity.cc jls jl ios-x86 ios-arm scan_build_report
find . -name "*.[oda]" -exec rm -f {} \;
find . -type f -regex ".*\.\(\(gcda\)\|\(gcno\)\)" -exec rm {} \;
rm -rf bzip2* snappy* zlib* lz4* zstd*
cd java; /Applications/Xcode.app/Contents/Developer/usr/bin/make clean
rm -rf include/*
rm -rf test-libs/
rm -rf target
rm -rf benchmark/target
rm -rf samples/target
  GEN      util/build_version.cc
  GEN      util/build_version.cc
  CC       cache/clock_cache.o
  [... ~200 more per-file "CC" compilation lines elided ...]
  CC       utilities/blob_db/blob_dump_tool.o
  AR       librocksdb.a
ar: creating archive librocksdb.a
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: librocksdb.a(db_impl_debug.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: librocksdb.a(rocks_lua_compaction_filter.o) has no symbols
-- Downloading Google Test...
-- Extracting Google Test...
Configuring Google Test...
-- The CXX compiler identification is AppleClang 9.1.0.9020039
-- The C compiler identification is AppleClang 9.1.0.9020039
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Found PythonInterp: /usr/local/bin/python (found version "2.7.11") 
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE  
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/cameron/code/typesense/external-Darwin/googletest-release-1.8.0/googletest/build
Building Google Test locally...
Scanning dependencies of target gtest
[ 25%] Building CXX object CMakeFiles/gtest.dir/src/gtest-all.cc.o
[ 50%] Linking CXX static library libgtest.a
[ 50%] Built target gtest
Scanning dependencies of target gtest_main
[ 75%] Building CXX object CMakeFiles/gtest_main.dir/src/gtest_main.cc.o
[100%] Linking CXX static library libgtest_main.a
[100%] Built target gtest_main
-- Downloading test resource - words.txt
-- Downloading test resource - uuid.txt
-- Downloading G3log...
-- Extracting G3log...
Configuring G3log...
-- The CXX compiler identification is AppleClang 9.1.0.9020039
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The C compiler identification is AppleClang 9.1.0.9020039
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
A 64-bit OS detected
-DUSE_DYNAMIC_LOGGING_LEVELS=OFF
-DCHANGE_G3LOG_DEBUG_TO_DBUG=OFF 	(Debuggin logging level is 'DEBUG')
-DENABLE_FATAL_SIGNALHANDLING=ON	Fatal signal handler is enabled




COMPILE_DEFINITIONS:  
End of COMPILE_DEFINITIONS
Generated src/g3log/generated_definitions.hpp
******************** START *************************
// AUTO GENERATED MACRO DEFINITIONS FOR G3LOG

/** ==========================================================================
* 2015 by KjellKod.cc. This is PUBLIC DOMAIN to use at your own risk and comes
* with no warranties. This code is yours to share, use and modify with no
* strings attached and no restrictions or obligations.
* 
* For more information see g3log/LICENSE or refer refer to http://unlicense.org
* ============================================================================*/
#pragma once

// CMake induced definitions below. See g3log/Options.cmake for details.


******************** END *************************

cmake for Clang 
-DADD_FATAL_EXAMPLE=ON		[contract][sigsegv][fatal choice] are examples of when g3log comes in handy
-DADD_G3LOG_BENCH_PERFORMANCE=OFF
-DADD_G3LOG_UNIT_TEST=OFF
Extracting git software version
Software Version: 1.2.400

Option to install using 'sudo make install
Installation locations: 
====================
Headers: /usr/local/include/g3log
Library installation directory: /usr/local/lib
For more information please see g3log/CPackLists.txt


To install: sudo dpkg -i g3log-***Linux.deb
To list package contents: sudo dpkg --contents g3log-***Linux.deb
List content of the installed package: sudo dpkg -L g3log
To remove: sudo dpkg -r g3log



      *******************************************************************
      Please do 'make clean-cmake' before next cmake generation.
         It is a good idea to purge your build directory of CMake
         generated cache files
      *******************************************************************
       
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/cameron/code/typesense/external-Darwin/g3log-1.3/build
Building G3log locally...
Scanning dependencies of target g3logger
[  4%] Building CXX object CMakeFiles/g3logger.dir/src/crashhandler_unix.cpp.o
[  8%] Building CXX object CMakeFiles/g3logger.dir/src/filesink.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/filesink.cpp:44:95: warning: braces around scalar initializer [-Wbraced-scalar-init]
      exit_msg.append(localtime_formatted(systemtime_now(), internal::time_formatted)).append({"\n"});
                                                                                              ^~~~~~
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/filesink.cpp:47:23: warning: braces around scalar initializer [-Wbraced-scalar-init]
      exit_msg.append({"Log file at: ["}).append(_log_file_with_path).append({"]\n"});
                      ^~~~~~~~~~~~~~~~~~
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/filesink.cpp:47:78: warning: braces around scalar initializer [-Wbraced-scalar-init]
      exit_msg.append({"Log file at: ["}).append(_log_file_with_path).append({"]\n"});
                                                                             ^~~~~~~
3 warnings generated.
[ 12%] Building CXX object CMakeFiles/g3logger.dir/src/g3log.cpp.o
[ 16%] Building CXX object CMakeFiles/g3logger.dir/src/logcapture.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/logcapture.cpp:51:22: warning: braces around scalar initializer [-Wbraced-scalar-init]
      _stack_trace = {"\n*******\tSTACKDUMP *******\n"};
                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
[ 20%] Building CXX object CMakeFiles/g3logger.dir/src/loglevels.cpp.o
[ 25%] Building CXX object CMakeFiles/g3logger.dir/src/logmessage.cpp.o
[ 29%] Building CXX object CMakeFiles/g3logger.dir/src/logworker.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/logworker.cpp:32:61: warning: braces around scalar initializer [-Wbraced-scalar-init]
         err_msg.append(uniqueMsg.get()->toString()).append({"]\n"});
                                                            ^~~~~~~
1 warning generated.
[ 33%] Building CXX object CMakeFiles/g3logger.dir/src/time.cpp.o
[ 37%] Linking CXX static library libg3logger.a
[ 37%] Built target g3logger
Scanning dependencies of target g3log-FATAL-sigsegv
[ 41%] Building CXX object CMakeFiles/g3log-FATAL-sigsegv.dir/example/main_sigsegv.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/example/main_sigsegv.cpp:41:152: warning: more '%' conversions than data arguments [-Wformat]
      LOGF(DEBUG, "ILLEGAL PRINTF_SYNTAX EXAMPLE. WILL GENERATE compiler warning.\n\nbadly formatted message:[Printf-type %s is the number 1 for many %s]", logging.c_str());
                                                                                                                                                      ~^
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/g3log/g3log.hpp:202:65: note: expanded from macro 'LOGF'
   if(g3::logLevel(level)) INTERNAL_LOG_MESSAGE(level).capturef(printf_like_message, ##__VA_ARGS__)
                                                                ^
1 warning generated.
[ 45%] Linking CXX executable g3log-FATAL-sigsegv
[ 45%] Built target g3log-FATAL-sigsegv
Scanning dependencies of target g3log-FATAL-contract
[ 50%] Building CXX object CMakeFiles/g3log-FATAL-contract.dir/example/main_contract.cpp.o
[ 54%] Linking CXX executable g3log-FATAL-contract
[ 54%] Built target g3log-FATAL-contract
Scanning dependencies of target g3log-FATAL-choice
[ 58%] Building CXX object CMakeFiles/g3log-FATAL-choice.dir/example/main_fatal_choice.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/example/main_fatal_choice.cpp:93:76: warning: format specifies type 'int' but the argument has type 'const char *' [-Wformat]
      LOGF(INFO, "2nd attempt at ILLEGAL PRINTF_SYNTAX %d EXAMPLE. %s %s", "hello", 1);
                                                       ~~                  ^~~~~~~
                                                       %s
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/g3log/g3log.hpp:202:88: note: expanded from macro 'LOGF'
   if(g3::logLevel(level)) INTERNAL_LOG_MESSAGE(level).capturef(printf_like_message, ##__VA_ARGS__)
                                                                                       ^~~~~~~~~~~
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/example/main_fatal_choice.cpp:93:85: warning: format specifies type 'char *' but the argument has type 'int' [-Wformat]
      LOGF(INFO, "2nd attempt at ILLEGAL PRINTF_SYNTAX %d EXAMPLE. %s %s", "hello", 1);
                                                                   ~~               ^
                                                                   %d
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/g3log/g3log.hpp:202:88: note: expanded from macro 'LOGF'
   if(g3::logLevel(level)) INTERNAL_LOG_MESSAGE(level).capturef(printf_like_message, ##__VA_ARGS__)
                                                                                       ^~~~~~~~~~~
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/example/main_fatal_choice.cpp:93:72: warning: more '%' conversions than data arguments [-Wformat]
      LOGF(INFO, "2nd attempt at ILLEGAL PRINTF_SYNTAX %d EXAMPLE. %s %s", "hello", 1);
                                                                      ~^
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/g3log/g3log.hpp:202:65: note: expanded from macro 'LOGF'
   if(g3::logLevel(level)) INTERNAL_LOG_MESSAGE(level).capturef(printf_like_message, ##__VA_ARGS__)
                                                                ^
3 warnings generated.
[ 62%] Linking CXX executable g3log-FATAL-choice
[ 62%] Built target g3log-FATAL-choice
Scanning dependencies of target g3logger_shared
[ 66%] Building CXX object CMakeFiles/g3logger_shared.dir/src/crashhandler_unix.cpp.o
[ 70%] Building CXX object CMakeFiles/g3logger_shared.dir/src/filesink.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/filesink.cpp:44:95: warning: braces around scalar initializer [-Wbraced-scalar-init]
      exit_msg.append(localtime_formatted(systemtime_now(), internal::time_formatted)).append({"\n"});
                                                                                              ^~~~~~
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/filesink.cpp:47:23: warning: braces around scalar initializer [-Wbraced-scalar-init]
      exit_msg.append({"Log file at: ["}).append(_log_file_with_path).append({"]\n"});
                      ^~~~~~~~~~~~~~~~~~
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/filesink.cpp:47:78: warning: braces around scalar initializer [-Wbraced-scalar-init]
      exit_msg.append({"Log file at: ["}).append(_log_file_with_path).append({"]\n"});
                                                                             ^~~~~~~
3 warnings generated.
[ 75%] Building CXX object CMakeFiles/g3logger_shared.dir/src/g3log.cpp.o
[ 79%] Building CXX object CMakeFiles/g3logger_shared.dir/src/logcapture.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/logcapture.cpp:51:22: warning: braces around scalar initializer [-Wbraced-scalar-init]
      _stack_trace = {"\n*******\tSTACKDUMP *******\n"};
                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
[ 83%] Building CXX object CMakeFiles/g3logger_shared.dir/src/loglevels.cpp.o
[ 87%] Building CXX object CMakeFiles/g3logger_shared.dir/src/logmessage.cpp.o
[ 91%] Building CXX object CMakeFiles/g3logger_shared.dir/src/logworker.cpp.o
/Users/cameron/code/typesense/external-Darwin/g3log-1.3/src/logworker.cpp:32:61: warning: braces around scalar initializer [-Wbraced-scalar-init]
         err_msg.append(uniqueMsg.get()->toString()).append({"]\n"});
                                                            ^~~~~~~
1 warning generated.
[ 95%] Building CXX object CMakeFiles/g3logger_shared.dir/src/time.cpp.o
[100%] Linking CXX shared library libg3logger.dylib
[100%] Built target g3logger_shared
-- Could NOT find NGHTTP2 (missing: NGHTTP2_INCLUDE_DIR) 
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/cameron/code/typesense/build
+ make -C /Users/cameron/code/typesense/build
Scanning dependencies of target typesense-server
[  1%] Building CXX object CMakeFiles/typesense-server.dir/src/api.cpp.o
[  3%] Building CXX object CMakeFiles/typesense-server.dir/src/array.cpp.o
[  4%] Building CXX object CMakeFiles/typesense-server.dir/src/array_base.cpp.o
[  6%] Building CXX object CMakeFiles/typesense-server.dir/src/array_utils.cpp.o
[  8%] Building CXX object CMakeFiles/typesense-server.dir/src/art.cpp.o
In file included from /Users/cameron/code/typesense/src/art.cpp:13:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/queue:401:5: error: static_assert failed ""
    static_assert((is_same<_Tp, value_type>::value), "" );
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/cameron/code/typesense/src/art.cpp:909:68: note: in instantiation of template class 'std::__1::priority_queue<art_node *,
      std::__1::vector<const art_node *, std::__1::allocator<const art_node *> >, std::__1::function<bool
      (const art_node *, const art_node *)> >' requested here
            std::function<bool(const art_node*, const art_node*)>> q;
                                                                   ^
/Users/cameron/code/typesense/src/art.cpp:912:13: error: no matching conversion for functional-style cast from
      'bool (const art_node *, const art_node *)' to 'std::priority_queue<art_node *, std::vector<const art_node *>, std::function<bool
      (const art_node *, const art_node *)> >'
        q = std::priority_queue<art_node *, std::vector<const art_node *>,
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/queue:392:28: note: candidate constructor
      (the implicit copy constructor) not viable: no known conversion from 'bool (const art_node *, const art_node *)' to 'const
      std::__1::priority_queue<art_node *, std::__1::vector<const art_node *, std::__1::allocator<const art_node *> >,
      std::__1::function<bool (const art_node *, const art_node *)> >' for 1st argument
class _LIBCPP_TEMPLATE_VIS priority_queue
                           ^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/queue:392:28: note: candidate constructor
      (the implicit move constructor) not viable: no known conversion from 'bool (const art_node *, const art_node *)' to
      'std::__1::priority_queue<art_node *, std::__1::vector<const art_node *, std::__1::allocator<const art_node *> >,
      std::__1::function<bool (const art_node *, const art_node *)> >' for 1st argument
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/queue:392:28: note: candidate constructor
      (the implicit default constructor) not viable: requires 0 arguments, but 1 was provided
/Users/cameron/code/typesense/src/art.cpp:915:13: error: no matching conversion for functional-style cast from
      'bool (const art_node *, const art_node *)' to 'std::priority_queue<art_node *, std::vector<const art_node *>, std::function<bool
      (const art_node *, const art_node *)> >'
        q = std::priority_queue<art_node *, std::vector<const art_node *>,
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/queue:392:28: note: candidate constructor
      (the implicit copy constructor) not viable: no known conversion from 'bool (const art_node *, const art_node *)' to 'const
      std::__1::priority_queue<art_node *, std::__1::vector<const art_node *, std::__1::allocator<const art_node *> >,
      std::__1::function<bool (const art_node *, const art_node *)> >' for 1st argument
class _LIBCPP_TEMPLATE_VIS priority_queue
                           ^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/queue:392:28: note: candidate constructor
      (the implicit move constructor) not viable: no known conversion from 'bool (const art_node *, const art_node *)' to
      'std::__1::priority_queue<art_node *, std::__1::vector<const art_node *, std::__1::allocator<const art_node *> >,
      std::__1::function<bool (const art_node *, const art_node *)> >' for 1st argument
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/queue:392:28: note: candidate constructor
      (the implicit default constructor) not viable: requires 0 arguments, but 1 was provided
3 errors generated.
make[2]: *** [CMakeFiles/typesense-server.dir/src/art.cpp.o] Error 1
make[1]: *** [CMakeFiles/typesense-server.dir/all] Error 2
make: *** [all] Error 2

I have all the required dependencies installed.

Output of clang --version

Apple LLVM version 9.1.0 (clang-902.0.39.1)
Target: x86_64-apple-darwin17.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

Output of g++ --version

Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.1.0 (clang-902.0.39.1)
Target: x86_64-apple-darwin17.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

No stress if you can't help - just thought it might be something obvious I was missing.

Typesense server crashing on Docker

Description

I'm running Typesense in a Docker image on DigitalOcean and the server keeps crashing.

Steps to reproduce

The behaviour is random; there are no reliable steps to reproduce it.

Expected Behavior

Actual Behavior

Metadata

This is the error trace:

Ready to accept requests on port 8108
2019/05/24 17:34:29 176283 ERROR [collection.cpp->search:567] Could not locate the JSON document for sequence ID: 3_$SI_
2019/05/24 17:34:29 176305 ERROR [collection.cpp->search:567] Could not locate the JSON document for sequence ID: 3_$SI_
*** Error in `/opt/typesense-server': free(): invalid pointer: 0x0000000003f37980 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f8386ee97e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x7fe0a)[0x7f8386ef1e0a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f8386ef598c]
/opt/typesense-server(_ZN10Collection6searchENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIS5_SaIS5_EERKS5_RKS8_RKS6_I7sort_bySaISD_EEimm14token_orderingbmN3spp15sparse_hash_setIS5_NSJ_8spp_hashIS5_EESt8equal_toIS5_ENSJ_27libc_allocator_with_reallocIS5_EEEESR_+0x27f2)[0x56f5a2]
/opt/typesense-server(_Z10get_searchR8http_reqR8http_res+0x110b)[0x59c08b]
/opt/typesense-server(_ZN10HttpServer17catch_all_handlerEP16st_h2o_handler_tP12st_h2o_req_t+0x8a3)[0x5a8193]
/opt/typesense-server[0x60f2cd]
/opt/typesense-server[0x616cc0]
/opt/typesense-server[0x604251]
/opt/typesense-server[0x604378]
/opt/typesense-server(h2o_evloop_run+0x2d)[0x60600d]
/opt/typesense-server(_ZN10HttpServer3runEv+0x211)[0x5a8d51]
/opt/typesense-server(_Z10run_serverRN7cmdline6parserEPFvvES3_+0x9a3)[0x5f0e63]
/opt/typesense-server(main+0x11b)[0x5fd6fb]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f8386e92830]
/opt/typesense-server[0x556dfd]
======= Memory map: ========
00400000-00dc1000 r-xp 00000000 fc:01 1040565 /opt/typesense-server
00fc0000-00fe1000 rw-p 009c0000 fc:01 1040565 /opt/typesense-server
00fe1000-00ff6000 rw-p 00000000 00:00 0
020a2000-03f6c000 rw-p 00000000 00:00 0 [heap]
7f8330000000-7f8330021000 rw-p 00000000 00:00 0
7f8330021000-7f8334000000 ---p 00000000 00:00 0
7f8338000000-7f8338021000 rw-p 00000000 00:00 0
7f8338021000-7f833c000000 ---p 00000000 00:00 0
7f833c000000-7f833c021000 rw-p 00000000 00:00 0
7f833c021000-7f8340000000 ---p 00000000 00:00 0
7f8340000000-7f8340021000 rw-p 00000000 00:00 0
7f8340021000-7f8344000000 ---p 00000000 00:00 0
7f8344000000-7f8344021000 rw-p 00000000 00:00 0
7f8344021000-7f8348000000 ---p 00000000 00:00 0
7f8348000000-7f8348021000 rw-p 00000000 00:00 0
7f8348021000-7f834c000000 ---p 00000000 00:00 0
7f834c000000-7f834c021000 rw-p 00000000 00:00 0
7f834c021000-7f8350000000 ---p 00000000 00:00 0
7f8350000000-7f8350021000 rw-p 00000000 00:00 0
7f8350021000-7f8354000000 ---p 00000000 00:00 0
7f8354000000-7f8354021000 rw-p 00000000 00:00 0
7f8354021000-7f8358000000 ---p 00000000 00:00 0
7f8358000000-7f8358021000 rw-p 00000000 00:00 0
7f8358021000-7f835c000000 ---p 00000000 00:00 0
7f835c000000-7f835c021000 rw-p 00000000 00:00 0
7f835c021000-7f8360000000 ---p 00000000 00:00 0
7f8360000000-7f8360021000 rw-p 00000000 00:00 0
7f8360021000-7f8364000000 ---p 00000000 00:00 0
7f8364000000-7f8364021000 rw-p 00000000 00:00 0
7f8364021000-7f8368000000 ---p 00000000 00:00 0
7f8368000000-7f8368021000 rw-p 00000000 00:00 0
7f8368021000-7f836c000000 ---p 00000000 00:00 0
7f836f7df000-7f836f7e0000 ---p 00000000 00:00 0
7f836f7e0000-7f836ffe0000 rw-p 00000000 00:00 0
7f836ffe0000-7f836ffe1000 ---p 00000000 00:00 0
7f836ffe1000-7f83707e1000 rw-p 00000000 00:00 0
7f83707e1000-7f83707e2000 ---p 00000000 00:00 0
7f83707e2000-7f8370fe2000 rw-p 00000000 00:00 0
7f8370fe2000-7f8370fe3000 ---p 00000000 00:00 0
7f8370fe3000-7f83717e3000 rw-p 00000000 00:00 0
7f83757eb000-7f83757ec000 ---p 00000000 00:00 0
7f83757ec000-7f8375fec000 rw-p 00000000 00:00 0
7f8375fec000-7f8375fed000 ---p 00000000 00:00 0
7f8375fed000-7f83767ed000 rw-p 00000000 00:00 0
7f83767ed000-7f83767ee000 ---p 00000000 00:00 0
7f83767ee000-7f8376fee000 rw-p 00000000 00:00 0
7f8376fee000-7f8376fef000 ---p 00000000 00:00 0
7f8376fef000-7f83777ef000 rw-p 00000000 00:00 0
7f83777ef000-7f83777f0000 ---p 00000000 00:00 0
7f83777f0000-7f8377ff0000 rw-p 00000000 00:00 0
7f8377ff0000-7f8377ff1000 ---p 00000000 00:00 0
7f8377ff1000-7f83787f1000 rw-p 00000000 00:00 0
7f83787f1000-7f83787f2000 ---p 00000000 00:00 0
7f83787f2000-7f8378ff2000 rw-p 00000000 00:00 0
7f8378ff2000-7f8378ff3000 ---p 00000000 00:00 0
7f8378ff3000-7f83797f3000 rw-p 00000000 00:00 0
7f83797f3000-7f83797f4000 ---p 00000000 00:00 0
7f83797f4000-7f8379ff4000 rw-p 00000000 00:00 0
7f8379ff4000-7f8379ff5000 ---p 00000000 00:00 0
7f8379ff5000-7f837a7f5000 rw-p 00000000 00:00 0
7f837a7f5000-7f837a7f6000 ---p 00000000 00:00 0
7f837a7f6000-7f837aff6000 rw-p 00000000 00:00 0
7f837aff6000-7f837aff7000 ---p 00000000 00:00 0
7f837aff7000-7f837b7f7000 rw-p 00000000 00:00 0
7f837b7f7000-7f837b7f8000 ---p 00000000 00:00 0
7f837b7f8000-7f837bff8000 rw-p 00000000 00:00 0
7f837bff8000-7f837bff9000 ---p 00000000 00:00 0
7f837bff9000-7f837c7f9000 rw-p 00000000 00:00 0
7f837c7f9000-7f837c7fa000 ---p 00000000 00:00 0
7f837c7fa000-7f837cffa000 rw-p 00000000 00:00 0
7f837cffa000-7f837cffb000 ---p 00000000 00:00 0
7f837cffb000-7f837d7fb000 rw-p 00000000 00:00 0
7f837d7fb000-7f837d7fc000 ---p 00000000 00:00 0
7f837d7fc000-7f837dffc000 rw-p 00000000 00:00 0
7f837dffc000-7f837dffd000 ---p 00000000 00:00 0
7f837dffd000-7f837e7fd000 rw-p 00000000 00:00 0
7f837e7fd000-7f837e7fe000 ---p 00000000 00:00 0
7f837e7fe000-7f837effe000 rw-p 00000000 00:00 0
7f837effe000-7f837efff000 ---p 00000000 00:00 0
7f837efff000-7f837f7ff000 rw-p 00000000 00:00 0
7f837f7ff000-7f837f800000 ---p 00000000 00:00 0
7f837f800000-7f8380000000 rw-p 00000000 00:00 0
7f8380000000-7f8380021000 rw-p 00000000 00:00 0
7f8380021000-7f8384000000 ---p 00000000 00:00 0
7f8384457000-7f838446d000 r-xp 00000000 fc:01 1033195 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f838446d000-7f838466c000 ---p 00016000 fc:01 1033195 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f838466c000-7f838466d000 rw-p 00015000 fc:01 1033195 /lib/x86_64-linux-gnu/libgcc_s.so.1
7f838466d000-7f838466e000 ---p 00000000 00:00 0
7f838466e000-7f8384e6e000 rw-p 00000000 00:00 0
7f8384e6e000-7f8384e6f000 ---p 00000000 00:00 0
7f8384e6f000-7f838566f000 rw-p 00000000 00:00 0
7f838566f000-7f8385670000 ---p 00000000 00:00 0
7f8385670000-7f8385e70000 rw-p 00000000 00:00 0
7f8385e70000-7f8385e71000 ---p 00000000 00:00 0
7f8385e71000-7f8386671000 rw-p 00000000 00:00 0
7f8386671000-7f8386672000 ---p 00000000 00:00 0
7f8386672000-7f8386e72000 rw-p 00000000 00:00 0
7f8386e72000-7f8387031000 r-xp 00000000 fc:01 1033174 /lib/x86_64-linux-gnu/libc-2.23.so
7f8387031000-7f8387231000 ---p 001bf000 fc:01 1033174 /lib/x86_64-linux-gnu/libc-2.23.so
7f8387231000-7f8387235000 r--p 001bf000 fc:01 1033174 /lib/x86_64-linux-gnu/libc-2.23.so
7f8387235000-7f8387237000 rw-p 001c3000 fc:01 1033174 /lib/x86_64-linux-gnu/libc-2.23.so
7f8387237000-7f838723b000 rw-p 00000000 00:00 0
7f838723b000-7f8387343000 r-xp 00000000 fc:01 1033206 /lib/x86_64-linux-gnu/libm-2.23.so
7f8387343000-7f8387542000 ---p 00108000 fc:01 1033206 /lib/x86_64-linux-gnu/libm-2.23.so
7f8387542000-7f8387543000 r--p 00107000 fc:01 1033206 /lib/x86_64-linux-gnu/libm-2.23.so
7f8387543000-7f8387544000 rw-p 00108000 fc:01 1033206 /lib/x86_64-linux-gnu/libm-2.23.so
7f8387544000-7f8387547000 r-xp 00000000 fc:01 1033187 /lib/x86_64-linux-gnu/libdl-2.23.so
7f8387547000-7f8387746000 ---p 00003000 fc:01 1033187 /lib/x86_64-linux-gnu/libdl-2.23.so
7f8387746000-7f8387747000 r--p 00002000 fc:01 1033187 /lib/x86_64-linux-gnu/libdl-2.23.so
7f8387747000-7f8387748000 rw-p 00003000 fc:01 1033187 /lib/x86_64-linux-gnu/libdl-2.23.so
7f8387748000-7f838774f000 r-xp 00000000 fc:01 1033248 /lib/x86_64-linux-gnu/librt-2.23.so
7f838774f000-7f838794e000 ---p 00007000 fc:01 1033248 /lib/x86_64-linux-gnu/librt-2.23.so
7f838794e000-7f838794f000 r--p 00006000 fc:01 1033248 /lib/x86_64-linux-gnu/librt-2.23.so
7f838794f000-7f8387950000 rw-p 00007000 fc:01 1033248 /lib/x86_64-linux-gnu/librt-2.23.so
7f8387950000-7f8387968000 r-xp 00000000 fc:01 1033242 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f8387968000-7f8387b67000 ---p 00018000 fc:01 1033242 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f8387b67000-7f8387b68000 r--p 00017000 fc:01 1033242 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f8387b68000-7f8387b69000 rw-p 00018000 fc:01 1033242 /lib/x86_64-linux-gnu/libpthread-2.23.so
7f8387b69000-7f8387b6d000 rw-p 00000000 00:00 0
7f8387b6d000-7f8387b93000 r-xp 00000000 fc:01 1033154 /lib/x86_64-linux-gnu/ld-2.23.so
7f8387d87000-7f8387d8e000 rw-p 00000000 00:00 0
7f8387d8f000-7f8387d92000 rw-p 00000000 00:00 0
7f8387d92000-7f8387d93000 r--p 00025000 fc:01 1033154 /lib/x86_64-linux-gnu/ld-2.23.so
7f8387d93000-7f8387d94000 rw-p 00026000 fc:01 1033154 /lib/x86_64-linux-gnu/ld-2.23.so
7f8387d94000-7f8387d95000 rw-p 00000000 00:00 0
7ffdabad0000-7ffdabaf1000 rw-p 00000000 00:00 0 [stack]
7ffdabb67000-7ffdabb6a000 r--p 00000000 00:00 0 [vvar]
7ffdabb6a000-7ffdabb6c000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
2019/05/24 17:34:29 180337

***** FATAL SIGNAL RECEIVED *******
Received fatal signal: SIGABRT(6) PID: 1

***** SIGNAL SIGABRT(6)

******* STACKDUMP *******
stack dump [1] /opt/typesense-server() [0x6b0f68]
stack dump [2] /lib/x86_64-linux-gnu/libpthread.so.0+0x113e0 [0x7f83879613e0]
stack dump [3] /lib/x86_64-linux-gnu/libc.so.6gsignal+0x38 [0x7f8386ea7428]
stack dump [4] /lib/x86_64-linux-gnu/libc.so.6abort+0x16a [0x7f8386ea902a]
stack dump [5] /lib/x86_64-linux-gnu/libc.so.6+0x777ea [0x7f8386ee97ea]
stack dump [6] /lib/x86_64-linux-gnu/libc.so.6+0x7fe0a [0x7f8386ef1e0a]
stack dump [7] /lib/x86_64-linux-gnu/libc.so.6cfree+0x4c [0x7f8386ef598c]

    stack dump [8]  /opt/typesense-server : Collection::search(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<sort_by, std::allocator<sort_by> > const&, int, unsigned long, unsigned long, token_ordering, bool, unsigned long, spp::sparse_hash_set<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, spp::spp_hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, spp::libc_allocator_with_realloc<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, spp::sparse_hash_set<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, spp::spp_hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, spp::libc_allocator_with_realloc<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)+0x27f2 [0x56f5a2]

    stack dump [9]  /opt/typesense-server : get_search(http_req&, http_res&)+0x110b [0x59c08b]

    stack dump [10]  /opt/typesense-server : HttpServer::catch_all_handler(st_h2o_handler_t*, st_h2o_req_t*)+0x8a3 [0x5a8193]
    stack dump [11]  /opt/typesense-server() [0x60f2cd]
    stack dump [12]  /opt/typesense-server() [0x616cc0]
    stack dump [13]  /opt/typesense-server() [0x604251]
    stack dump [14]  /opt/typesense-server() [0x604378]
    stack dump [15]  /opt/typesense-serverh2o_evloop_run+0x2d [0x60600d]

    stack dump [16]  /opt/typesense-server : HttpServer::run()+0x211 [0x5a8d51]

    stack dump [17]  /opt/typesense-server : run_server(cmdline::parser&, void (*)(), void (*)())+0x9a3 [0x5f0e63]
    stack dump [18]  /opt/typesense-servermain+0x11b [0x5fd6fb]
    stack dump [19]  /lib/x86_64-linux-gnu/libc.so.6__libc_start_main+0xf0 [0x7f8386e92830]
    stack dump [20]  /opt/typesense-server() [0x556dfd]

Exiting after fatal event (FATAL_SIGNAL). Fatal type: SIGABRT
Log content flushed sucessfully to sink

exitWithDefaultSignalHandler:238. Exiting due to FATAL_SIGNAL, 6

Typesense Version:
0.9.2

OS:
Ubuntu 18.04 Docker image on DigitalOcean

Allow query to be optional?

Description

I was trying to see if we can query based only on some filter_by conditions, and it looks like that's not currently possible: q is a required parameter, so I couldn't leave it out.

I was thinking of this use case: say, with the companies example, I want to "search" for a list of companies that have fewer than 100 employees. Do we want to support this use case?

Steps to reproduce

With the Ruby library:

typesense.collections['companies'].documents.create(
  'id' => '124',
  'company_name' => 'Stark Industries',
  'num_employees' => 5215,
  'country' => 'USA'
)

typesense.collections['companies'].documents.create(
  'id' => '127',
  'company_name' => 'Stark Corp',
  'num_employees' => 1031,
  'country' => 'USA'
)

typesense.collections['companies'].documents.create(
  'id' => '125',
  'company_name' => 'Acme Corp',
  'num_employees' => 1002,
  'country' => 'France'
)

typesense.collections['companies'].documents.create(
  'id' => '126',
  'company_name' => 'Doofenshmirtz Inc',
  'num_employees' => 2,
  'country' => 'Tri-State Area'
)

results = typesense.collections['companies'].documents.search(
  'q' => '*',
  'query_by'  => 'company_name',
  'filter_by' => 'num_employees:<100',
  'sort_by'   => 'num_employees:desc'
)
ap results

# {
#   "found"          => 0,
#   "hits"           => [],
#   "page"           => 1,
#   "search_time_ms" => 0
# }

Expected Behavior

I'm thinking we either allow q to be an optional parameter or support wildcards for q.
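For reference, a wildcard query is one way this could work: with q=*, every document matches and filter_by alone narrows the result set. A minimal sketch against the companies example above (assumes the same local server and API key used elsewhere in these reports):

# Sketch: wildcard query, so only filter_by narrows the results.
$ curl "http://localhost:8108/collections/companies/documents/search?q=*&query_by=company_name&filter_by=num_employees:<100&sort_by=num_employees:desc" -H "X-TYPESENSE-API-KEY: abcd"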

Actual Behavior

q is a required parameter and doesn't support wildcards, so the use case described above currently isn't supported.

Metadata

Typesense Version: 0.8.0-rc1

OS: Docker

Dockcross gotchas

@jasonbosco

I dug into dockcross in more detail and found a couple of gotchas:

Binary portability

The dockcross/linux-x64 image is built off Debian Jessie, which is a recent distro, but that also makes any binary we build on it incompatible with slightly older distros like Ubuntu 14.04 (just as a Java 6 program will run on Java 8, but not the other way around).

I tested this manually by trying to run the executable built from dockcross on 14.04 - it failed, saying that the glibc version is too old. When I searched online for what people do to get portable native builds on Linux, the answer was to deliberately build the executable on an old distro, so that the binaries work on both old and newer ones.

So I went ahead and actually tried it - built it on Ubuntu 12.04: https://github.com/wreally/typesense/pull/3/files#diff-57569576f2f06a3d823caa45afd685e1

And the executable ran fine on both 14.04 and 16.04.

The Windows build isn't what it seems

For Windows builds, they use WINE. I'm pretty sure that's going to be a little different from actual Windows builds. If and when we want to support Windows, we need to build on an actual Windows machine - there are no shortcuts here. Luckily CMake is well supported on Windows, but we would still need to do some work to get it going.

Summary

Given the two major gotchas above, dockcross isn't really going to help with either Linux or Windows builds. I suggest we just go with this Ubuntu 12.04 based Docker image for builds and ditch dockcross.

Collision between /documents/id endpoint and /documents/export endpoint

Description

The possibility of this collision was in the back of my head when writing the API spec... Looks like it has indeed manifested itself in the implementation.

Currently, the URL to retrieve/delete a doc is /collections/{collectionName}/documents/{documentId}. So when you do a GET on /collections/{collectionName}/documents/export it says "document with ID export is not found" instead of considering /export an action endpoint.

Steps to reproduce

$ curl -X GET "http://localhost:8108/collections/companies/documents/export" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd" -sv
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8108 (#0)
> GET /collections/companies/documents/export HTTP/1.1
> Host: localhost:8108
> User-Agent: curl/7.54.0
> accept: application/json
> X-TYPESENSE-API-KEY: abcd
>
< HTTP/1.1 404 Not Found
< Date: Thu, 04 Jan 2018 15:55:50 GMT
< Connection: close
< Access-Control-Allow-Origin: *
< content-type: application/json; charset=utf-8
<
* Closing connection 0
{"message": "Could not find a document with id: export"}

Expected Behavior

It should export all documents in the collection.

Actual Behavior

It considers export an ID and says the document isn't found.

Metadata

Typesense Version: 0.8-api-changes

Multiple filter_by conditions are not processed

Description

None of the conditions given after the "&&" filter_by separator are processed. The search results are the same with or without those conditions. The example is taken from the Typesense guide.

Steps to reproduce

Search Query --> http://localhost:8108/collections/books/documents/search?q=The&query_by=title&filter_by=publication_year:1925&&ratings_count:2683664
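Note that a literal "&&" inside the URL above is itself a likely culprit: the HTTP query-string parser treats each "&" as a parameter separator, so everything after the first condition never reaches filter_by at all. Percent-encoding the separator keeps the whole expression in one parameter; a sketch:

# Sketch: percent-encode " && " (%20%26%26%20) so the query-string parser
# does not split the filter expression into separate HTTP parameters.
$ curl "http://localhost:8108/collections/books/documents/search?q=The&query_by=title&filter_by=publication_year:1925%20%26%26%20ratings_count:2683664" -H "X-TYPESENSE-API-KEY: abcd"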

Expected Behaviour

When "&&" is changed to "," separator , result with the same query (in a custom local build)

[screenshot: one result returned]

Actual Behaviour

The total number of results is 3, even though the condition is true for only 1 result:

[screenshot: three results with both conditions]

Metadata

Typesense Version: 0.9.2

OS: Ubuntu

Allow multiple query_by fields to hold same scoring weight, or specify weight per field

Description

Right now, according to the docs:

The order of the fields is important: a record that matches on a field earlier in the list is considered more relevant than a record matched on a field later in the list.

It would be helpful to be able to customize this, even in the simplest form of multiple fields holding the same weight, e.g. 'query_by' => [ [ 'text', 'author' ], 'description' ].
It seems like a pretty common case for a search box not to know which field the user is actually searching against.
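For reference, later Typesense versions address this with a query_by_weights search parameter that pairs one weight with each query_by field; equal weights make fields count the same. A hedged sketch (the collection and field names here are hypothetical):

# Sketch: text and author share the same weight; description ranks lower.
$ curl "http://localhost:8108/collections/posts/documents/search?q=rowling&query_by=text,author,description&query_by_weights=2,2,1" -H "X-TYPESENSE-API-KEY: abcd"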

Steps to reproduce

Expected Behavior

Score multiple fields with the same weight.

Actual Behavior

There is currently no way to do this.

Metadata

Typesense Version: 0.9.1

OS: all

Btw, Typesense is very impressive! Great work! If I can contribute in any way, let me know, and I will try.

Paging max

Description

As noted in the documentation, the limit for results is 500. It seems this is not a per-page limit but a cap on the overall result set?

Steps to reproduce

Query a page beyond the first 500 results while the total number of matching documents is greater than 500.

Expected Behavior

One would assume that if there were 1,500 results found for example and you ask for page 2 with a per_page of 500, that it would return the next 500, leaving 500 more to return on page 3.

Actual Behavior

Asking for a following page of a result set, when page * per_page would exceed 500 while the total results found is greater than 500, returns:
"message": "Only the first 500 results are available."

Notes

Our documents are amalgams of data from transcripts that belong to single events. We merge the search results for all of the timecodes of text found in the document search. There are millions of documents in the collection(s), so paging does not behave as one would expect. Is this paging behavior a bug? If this is the expected logic, then it is a major roadblock and not something one would expect from paging functionality. It would be extremely limiting to not be able to access records beyond the first 500.
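As a stop-gap for bulk access beyond the paging window, the export endpoint (discussed in an earlier report) streams every document in a collection as JSON lines without going through search paging; a sketch with a hypothetical collection name:

# Sketch: dump the whole collection, bypassing the 500-result search window.
$ curl "http://localhost:8108/collections/events/documents/export" -H "X-TYPESENSE-API-KEY: abcd" > events.jsonl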

Typesense Version:
0.9.2
OS:
ubuntu server 18 in clusters

Install Typesense via deb and rpm

Description

Currently we only publish a tarball with just the executable. Having proper deb and rpm packages would allow us to ship a systemd/init.d script that makes administration (start/stop etc.) easier.

Steps to reproduce

NA

Expected Behavior

We should be able to install from a deb/rpm and do service typesense start/stop/status/restart.
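A sketch of the desired admin flow once packages exist (the package file name and version here are assumptions, not shipped artifacts):

# Sketch: install a hypothetical deb and manage the server as a service.
$ sudo dpkg -i typesense-server-0.10.0-amd64.deb
$ sudo service typesense start   # likewise: stop / status / restart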

Actual Behavior

NA

Metadata

Typesense Version: 0.10.x

OS: all

Performance with a large number of collections

For example, I have tens of thousands of collections with thousands of docs each.
How will this impact:
– indexing performance?
– search performance?
– memory usage?

How can I benchmark this?
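One crude starting point is to time searches against a sample of collections with curl, and watch the server's memory (RSS) as collections are indexed; a sketch with placeholder collection names and query:

# Sketch: report total request time for the same search on several collections.
$ for c in collection_1 collection_2 collection_3; do
    curl -s -o /dev/null -w "$c: %{time_total}s\n" \
      "http://localhost:8108/collections/$c/documents/search?q=test&query_by=title" \
      -H "X-TYPESENSE-API-KEY: abcd"
  done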

[Internal Information] How is the data stored?

This isn't a bug; if this isn't the appropriate place for it, feel free to close.

I have been interested in how search works for a while now, but haven't found any articles on the internals that I found approachable from zero knowledge.

Would you be able to explain a little about how the data gets stored, and how that benefits the performance of a search?

I tried looking through the code a little to see if I could find the binary file formats, but from a quick skim I was unable to find them.

Thanks.

Document that the minimum query string length should be 3 for results to be returned

Description

I tried searching for a 2-character string in a document and it kept returning 0 results, which surprised me at first. We should probably document that the minimum search query length is 3 characters.

Steps to reproduce

# Two character query
$ curl -X GET "http://localhost:8108/collections/companies/documents/search?q=St&query_by=company_name" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd" -sv
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8108 (#0)
> GET /collections/companies/documents/search?q=St&query_by=company_name HTTP/1.1
> Host: localhost:8108
> User-Agent: curl/7.54.0
> accept: application/json
> X-TYPESENSE-API-KEY: abcd
>
< HTTP/1.1 200 OK
< Date: Thu, 04 Jan 2018 16:05:07 GMT
< Connection: keep-alive
< Access-Control-Allow-Origin: *
< content-type: application/json; charset=utf-8
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
{"found":0,"hits":[],"page":1,"took_ms":0}

# Three character query
$ curl -X GET "http://localhost:8108/collections/companies/documents/search?q=Sta&query_by=company_name" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd" -sv
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8108 (#0)
> GET /collections/companies/documents/search?q=Sta&query_by=company_name HTTP/1.1
> Host: localhost:8108
> User-Agent: curl/7.54.0
> accept: application/json
> X-TYPESENSE-API-KEY: abcd
>
< HTTP/1.1 200 OK
< Date: Thu, 04 Jan 2018 16:06:33 GMT
< Connection: keep-alive
< Access-Control-Allow-Origin: *
< content-type: application/json; charset=utf-8
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
{"facet_counts":[],"found":1,"hits":[{"_highlight":{"company_name":"<mark>Stark</mark> Industries"},"document":{"company_name":"Stark Industries","country":"USA","id":"124","num_employees":5215}}],"page":1,"took_ms":0}

Expected Output

The two-character and three-character queries return the same results.

Actual Behavior

The two-character query returns 0 results.

Metadata

Typesense Version: 0.8-api-changes

DELETEing a non-existent collection crashes the server

$ curl -X DELETE "http://localhost:8108/collections/nonexistent" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd" -sv
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8108 (#0)
> DELETE /collections/nonexistent HTTP/1.1
> Host: localhost:8108
> User-Agent: curl/7.54.0
> accept: application/json
> X-TYPESENSE-API-KEY: abcd
>
* Empty reply from server
* Connection #0 to host localhost left intact
$ docker run -p 8108:8108  -it -v/tmp/typesense-data/:/data -it typesense/typesense:0.8-api-changes --data-dir /data --api-key=abcd --listen-port 8108 --enable-cors
Typesense version nightly
Finished loading collections from disk.
Server has started. Ready to accept requests on port 8108
$

Version: 0.8-api-changes

Add support for CORS

@kishorenc I started writing a Swagger Spec for the API and noticed that the API is throwing a 404 for OPTIONS requests (triggered by the browser when Swagger Editor tries to call the Typesense API).

Could you add support for passing in a CORS whitelist, maybe as a boot parameter?
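For reference, a CORS switch did ship as a boot parameter - the Docker invocations later in these reports pass --enable-cors. A sketch of starting the server with it and checking that a browser preflight no longer 404s:

# Sketch: enable CORS at boot, then issue the OPTIONS preflight a browser sends.
$ typesense-server --data-dir /tmp/typesense-data --api-key=abcd --enable-cors &
$ curl -i -X OPTIONS "http://localhost:8108/collections" -H "Origin: http://localhost:3000"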

v0.7.0

filter_by with facet string data is returning empty

Hi.

I'm experimenting with Typesense but I'm getting unexpected behaviour.

Best,

Vinícius.

Description

Using the companies example, a query with filter_by on country returns empty results if the field is a facet. Is this the expected behavior?

Steps to reproduce

The companies example is taken from here: https://typesense.org/0.9.2/api/

Query using filter_by:
localhost:8108/collections/companies/documents/search?q=stark&query_by=company_name&filter_by=country:USA

Expected Behavior

{"facet_counts":[1],"found":1,"hits":[{"document":{"company_name":"Stark Industries","country":"USA","id":"124","num_employees":5215},"highlights":[{"field":"company_name","snippet":"Stark Industries"}],"seq_id":0}],"page":1,"search_time_ms":0}

Actual Behavior

{"found":0,"hits":[],"page":1,"search_time_ms":0}

Metadata

Typesense Version: 0.9.2

OS: macOS 10.12.6

Ruby client

A Ruby client for interacting with the Typesense API.

Log searches for analytics

Is there a way to create a log of searches, the chosen result, and other metrics like the date/time of the search, to help with analysing what people are searching for?

Ability to pass arguments via config file

Description

We need a way to pass in options via a config file rather than the command line. This is a blocker for #40.
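A sketch of what this could look like: an INI-style file whose keys mirror the existing command-line flags, passed in via a --config flag (the file format, key names, and flag here are assumptions, not a shipped interface):

# Sketch: write a hypothetical INI config and boot the server from it.
$ cat > /etc/typesense/typesense-server.ini <<EOF
[server]
api-key = abcd
data-dir = /var/lib/typesense
listen-port = 8108
EOF
$ typesense-server --config=/etc/typesense/typesense-server.ini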

Steps to reproduce

NA

Expected Behavior

NA

Actual Behavior

NA

Metadata

Typesense Version: 0.9.0 and below

OS: all

Highlight matched query text in string[] fields

Description

Currently, matched text from a query is highlighted only for plain string fields. For example, if title is a string field and someone queries for harry, the result will be highlighted as follows:

"highlight": {
  "title": "<mark>Harry</mark> Potter and the Goblet of Fire"
}

The same should extend to a field of type string[].

Steps to reproduce

  1. Index some text into string[] fields.
  2. Try searching for some text against the string[] field.
  3. Observe the highlight field in the response.

Expected Behavior

On querying for rowling on an authors string[] field, highlighting of matched query text should be supported:

"highlight": {
  "authors": "J K <mark>Rowling</mark>"
}

Actual Behavior

"highlight": { }

Metadata

Typesense Version: 0.8.0

OS: all

30 minute server startup time.

Description

Very long startup time - at this point, 30 minutes. I have around 150k documents spread across 20 collections or so. The data schema is the same in all of them. Document sizes range from 5 KB to 200 KB, depending on whether the document has a transcript.
The documents contain an array of strings with the transcript text for each line, and an array of int32 to store the timecodes associated with each line.

Initially I experimented with storing each transcript line in its own document, but since each transcript belongs to a single event and there is no aggregation method, that made the data unusable. It produced about 5M document records in all.

Documents are added all the time and never removed; they are synced from our backend, which creates the data to fit the schema as event records are updated in our main system.

I have a cluster in an autoscaling group, but the long startup times make multiple instances impossible to sustain.

Q. Is there a tweak to get this to start up faster?
Q. Is it the data itself? Is there a better way to store this data so that results map to a single event?
Q. Is my use-case way out of scope for Typesense?

Steps to reproduce

Example of one of the larger documents:
https://search.invintus.com/collections/9375922947/documents/search/?q=State&query_by=transcriptLines,description,title,agendaTitles,agendaDescriptions&sort_by=eventDateTime:asc&per_page=1

Schema:

"fields": [
            {
                "facet": false,
                "name": "clientID",
                "type": "string"
            },
            {
                "facet": false,
                "name": "eventID",
                "type": "string"
            },
            {
                "facet": false,
                "name": "title",
                "type": "string"
            },
            {
                "facet": false,
                "name": "description",
                "type": "string"
            },
            {
                "facet": false,
                "name": "status",
                "type": "string"
            },
            {
                "facet": true,
                "name": "categories",
                "type": "string[]"
            },
            {
                "facet": false,
                "name": "thumb",
                "type": "string"
            },
            {
                "facet": false,
                "name": "agendaTimes",
                "type": "int32[]"
            },
            {
                "facet": false,
                "name": "agendaTitles",
                "type": "string[]"
            },
            {
                "facet": false,
                "name": "agendaDescriptions",
                "type": "string[]"
            },
            {
                "facet": false,
                "name": "eventDateTime",
                "type": "int32"
            },
            {
                "facet": false,
                "name": "runtimeMinutes",
                "type": "int32"
            },
            {
                "facet": false,
                "name": "transcriptTimes",
                "type": "int32[]"
            },
            {
                "facet": false,
                "name": "transcriptLines",
                "type": "string[]"
            }
        ]

Expected Behavior

Hopefully much shorter than 30 minutes to read from disk.

Actual Behavior

2019/01/27 10:19:38 932233      INFO [typesense_server_utils.cpp->run_server:69]        Starting Typesense 0.9.2
2019/01/27 10:19:40 459019      INFO [typesense_server_utils.cpp->run_server:79]        Loading collections from disk...
2019/01/27 10:49:49 654728      INFO [typesense_server_utils.cpp->run_server:86]        Finished loading collections from disk.
2019/01/27 10:49:49 657052      INFO [http_server.cpp->run:144] Typesense has started. Ready to accept requests on port 8108

Metadata

Typesense Version: 0.9.2

OS: ubuntu server 18.x

hardware: AWS EC2 r5a.xlarge  
data resides on EFS mount

Unexpected error during startup: Error while initializing store: Corruption: While creating a new Db, wal_dir contains existing log file

Description

While testing #32, I pulled the latest docker container and tried starting it up, but it crashed with this message:

2066/10/14 12:12:22 321077	ERROR [store.h->Store:76]	Error while initializing store: Corruption: While creating a new Db, wal_dir contains existing log file: : 000003.log

Steps to reproduce

Unfortunately, I couldn't replicate this consistently.

But here's a screengrab of the terminal when the issue happened for the first time:

Start the container and then press Ctrl + C

$ docker run -p 8108:8108 -v/tmp/typesense-data:/data typesense/typesense:highlight_all_fields   --data-dir /data --api-key=abcd --enable-cors
2066/10/05 17:30:47 003042	INFO [typesense_server.cpp->main:115]	Starting Typesense highlight_all_fields
2066/10/05 17:30:47 005276	INFO [typesense_server.cpp->main:123]	Loading collections from disk...
2066/10/05 17:30:47 182179	INFO [typesense_server.cpp->main:131]	Finished loading collections from disk.
2066/10/05 17:30:47 182992	INFO [http_server.cpp->run:144]	Typesense has started. Ready to accept requests on port 8108
^C2066/10/06 19:13:08 857310	INFO [typesense_server.cpp->catch_interrupt:18]	Stopping Typesense server...

Pull the latest container

typesense-core (master)$ docker pull typesense/typesense:highlight_all_fields
highlight_all_fields: Pulling from typesense/typesense
af49a5ceb2a5: Already exists
8f9757b472e7: Already exists
e931b117db38: Already exists
47b5e16c0811: Already exists
9332eaf1a55b: Already exists
a9acec172a9c: Pull complete
7e1ce622e8d2: Pull complete
Digest: sha256:280ebb6e1af743f938e61db8506dc9ea3df781953f8f2218da7ad17bfa9732c5
Status: Downloaded newer image for typesense/typesense:highlight_all_fields

Start it up again

typesense-core (master)$ docker run -p 8108:8108 -v/tmp/typesense-data:/data typesense/typesense:highlight_all_fields   --data-dir /data --api-key=abcd --enable-cors
2066/10/14 12:12:22 283031	INFO [typesense_server.cpp->main:115]	Starting Typesense highlight_all_fields
2066/10/14 12:12:22 283763	INFO [typesense_server.cpp->main:123]	Loading collections from disk...
2066/10/14 12:12:22 321077	ERROR [store.h->Store:76]	Error while initializing store: Corruption: While creating a new Db, wal_dir contains existing log file: : 000003.log
2066/10/14 12:12:22 322892

***** FATAL SIGNAL RECEIVED *******
Received fatal signal: SIGSEGV(11)	PID: 1

***** SIGNAL SIGSEGV(11)

*******	STACKDUMP *******
	stack dump [1]  /opt/typesense-server() [0x6a3288]
	stack dump [2]  /lib/x86_64-linux-gnu/libpthread.so.0+0x113e0 [0x7f1e729643e0]

	stack dump [3]  /opt/typesense-server : Store::get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const+0x28 [0x5a3c98]

	stack dump [4]  /opt/typesense-server : CollectionManager::init(Store*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x98 [0x5a8e78]
	stack dump [5]  /opt/typesense-servermain+0xc7b [0x5f7fdb]
	stack dump [6]  /lib/x86_64-linux-gnu/libc.so.6__libc_start_main+0xf0 [0x7f1e71e95830]
	stack dump [7]  /opt/typesense-server() [0x567211]

Exiting after fatal event  (FATAL_SIGNAL). Fatal type:  SIGSEGV
Log content flushed sucessfully to sink



exitWithDefaultSignalHandler:238. Exiting due to FATAL_SIGNAL, 11

typesense-core (master)$

Expected Behavior

The server shouldn't crash on startup. It seems like the server crashes/exits and then cleans up after itself, based on this message from the logs: `Log content flushed sucessfully to sink`. Can we do this cleanup without the server crashing?

Actual Behavior

The server crashes and stops.

Metadata

Typesense Version: typesense/typesense:highlight_all_fields

OS: Docker

Possible to synchronise changes between a remote database and Typesense Database

Description

Hello! I was wondering if there is a way to synchronise changes into the Typesense database from a remote server. For example, mongo-connector does this for Elasticsearch and MongoDB.

In my current scenario, I have to import the whole dataset at regular intervals to index any changes, which basically means dropping the collection and importing the whole table again, which is inefficient.
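
Until there is first-class connector support, one workaround is to export only the rows changed since the last sync and upsert them, instead of dropping the collection. A rough sketch, assuming an application-specific changed-rows export and the import endpoint's action=upsert mode (an assumption for the version this issue was filed against):

$ curl -X POST "${TYPESENSE_HOST}/collections/companies/documents/import?action=upsert" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
       --data-binary @changed_rows.jsonl

# changed_rows.jsonl holds one JSON document per line; documents whose
# ids already exist in the collection are overwritten in place.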

PS. Great work so far - it's really a good alternative that is light and fast.

How to get more than 10 facets

Description

Hi,
I would like to know how I can increase the number of facets returned by a query. Currently, I am only getting 10 facet values per query. Is there a parameter like facet_by_limit? In my search queries, I need to be able to get more than 10 facets.
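
For reference, later Typesense versions expose a max_facet_values search parameter for exactly this (it is an assumption that it is not available in 0.10.0); a minimal sketch:

$ curl "${TYPESENSE_HOST}/collections/companies/documents/search?q=stark&query_by=company_name&facet_by=country&max_facet_values=50" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}"

# Returns up to 50 values per facet field instead of the default 10.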

Expected Behavior

When I search for something, it should return all possible facets, or it should return the exact number of facets that I specify in the search parameters, if such a parameter exists.

Actual Behavior

It only returns the first 10 facets.

Metadata

Typesense Version: 0.10.0

OS: Ubuntu

Thanks

How to retrieve available facet/filter options

Hello,
First of all, amazing work. I really appreciate your hard work. I would like to know how I can get info about how many filter/facet options remain after a search query.
Let me explain in steps:
Let's say I have an eCommerce store and we provide search, a product type filter, a brand facet, etc. The type filter can have Mobile, Laptop, PC, Furniture, Food, etc., and the facet will have brand names.

Now when someone searches for apple, I want to remove Furniture & Food from my filters (based upon the search results). Currently I can't find any way to do this. The same goes for facets (if I filter on Mobile, how can I get the list of available brands from the queried data?). If I hard-code all the filters and some of them are not available for a certain search query but are still visible in the front-end, that will be unexpected behaviour (clicking on the filter won't have any effect).

Something like this:
(screenshot: Algolia automatically hiding unavailable filters)

It would be a plus if I could also show how many items each facet/filter has.
(screenshot: Algolia facet counts)
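
For what it's worth, this is how facet_by combined with filter_by behaves in later versions; a minimal sketch, assuming a hypothetical products collection with type and brand declared as facet fields:

$ curl "${TYPESENSE_HOST}/collections/products/documents/search?q=apple&query_by=product_name&filter_by=type:Mobile&facet_by=brand" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}"

# The facet_counts section of the response lists only the brands present
# in the filtered result set, along with a per-brand count - which is
# what's needed to hide empty filters and show item counts in the UI.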

Metadata

Typesense Version: 0.10.0

OS: Ubuntu

Language: Python

How to hide "total rows"

That works nicely, but I haven't found how to hide "Total rows", the text that displays the total number of rows saved in the db.

Rename `took_ms` field in search result

Description

The took_ms field in the search result seems a little odd as a name by itself. How about making it a little more descriptive, like search_time? We can note in the docs that it's in milliseconds.

Metadata

Typesense Version: 0.8-api-changes

Documents with duplicate IDs

Description

Looks like currently you can create multiple documents with the same ID. Was this intended?

Then, when you try to retrieve a document (with duplicate IDs), only the last document is returned, although the search endpoint returns all of the documents.

Steps to reproduce

$ curl "${TYPESENSE_HOST}/collections" \
       -X POST \
       -H "Content-Type: application/json" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
       -d '{
             "name": "companies",
             "fields": [
               {"name": "company_name", "type": "string" },
               {"name": "num_employees", "type": "int32" },
               {"name": "country", "type": "string", "facet": true }
             ],
             "token_ranking_field": "num_employees"
          }' 

{
  "name": "companies",
  "num_documents": 0,
  "fields": [
    {"name": "company_name", "type": "string" },
    {"name": "num_employees", "type": "int32" },
    {"name": "country", "type": "string", "facet": true }
  ],
  "token_ranking_field": "num_employees"
}

$ curl "${TYPESENSE_HOST}/collections/companies/documents" \
       -X POST \
       -H "Content-Type: application/json" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
       -d '{
            "id": "124",
            "company_name": "Stark Industries",
            "num_employees": 5215,
            "country": "USA"
          }'

{
  "id": "124",
  "company_name": "Stark Industries",
  "num_employees": 5215,
  "country": "USA"
}

$ curl "${TYPESENSE_HOST}/collections/companies/documents" \
       -X POST \
       -H "Content-Type: application/json" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
       -d '{
            "id": "124",
            "company_name": "Stark Industries",
            "num_employees": 5215,
            "country": "USA"
          }'

{
  "id": "124",
  "company_name": "Stark Industries",
  "num_employees": 5215,
  "country": "USA"
}

Expected Behavior

Maybe error out when a document with the same ID already exists?

Actual Behavior

Allows documents with the same ID to be indexed.

Metadata

Typesense Version: 0.8.0

OS: Ubuntu inside Docker

possible to insert documents in bulk

Description

Hello! On bootstrapping a server, I was wondering if it would be possible to provide an endpoint to insert a collection of documents in one API call? For example, after exporting a collection, I would like to import it into another Typesense server.

In my specific case, I am creating an index for a website and would like to insert everything at once, rather than making successive calls.

PS. Great work so far - really enjoying playing around with it.
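
For reference, later Typesense versions added a JSONL bulk import endpoint that covers this export/import case; a minimal sketch (the endpoint is an assumption for the version this issue was filed against):

$ curl -X POST "${TYPESENSE_HOST}/collections/companies/documents/import" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
       --data-binary @documents.jsonl

# documents.jsonl contains one JSON document per line, e.g. the output
# of the corresponding /documents/export endpoint on the source server.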

Ability to restrict fields returned

Description

Currently, the entire indexed document is returned in the search response. Sometimes it's desirable to have only a specific set of fields returned, for performance reasons. Ideally, both a whitelist and a blacklist of fields would be available.
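
For reference, later versions address this with include_fields and exclude_fields search parameters (neither existed when this issue was filed); a minimal sketch:

$ curl "${TYPESENSE_HOST}/collections/companies/documents/search?q=stark&query_by=company_name&include_fields=company_name,country" \
       -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}"

# Each returned document contains only company_name and country;
# exclude_fields works the same way as a blacklist.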

Steps to reproduce

N/A

Expected Behavior

N/A

Actual Behavior

N/A

Metadata

Typesense Version: all

OS: all

How to make it distributed?

Description

I'd like to improve the number of queries per second by having a distributed system using round-robin. As far as I understand from your guide, a read replica is used only if there is a timeout. However, I'd like queries to be passed to the next free read replica automatically; then, if a query fails by timeout, it is passed to the next free replica as well. (A rough client-side sketch follows the numbered example below.)

Expected Behavior

An example is below.

  1. A master node knows 3 read replicas. The master has a queue. All Replicas are available.
  2. Query A arrives and is pushed into the queue.
  3. The master sends A to Replica 1 and waits for the answer or the timeout.
  4. Query B arrives and is pushed into the queue.
  5. The master sends B to Replica 2 and waits for the answer or the timeout.
  6. Replica 2 answers and the master returns the result to the user. Now the master knows Replica 2 is free.
  7. Replica 1 fails. The master now sends A to the next free Replica. Replica 1 should be marked as unavailable?
  8. Query C arrives and is pushed into the queue.
  9. The master sends C to the next free Replica (3 ?) and waits for the timeout.
  10. etc.
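
A rough client-side sketch of the failover part, assuming three hypothetical replica hosts and treating a timeout as "try the next host" (the queue and marking replicas unavailable are omitted):

for host in replica-1:8108 replica-2:8108 replica-3:8108; do
  # --max-time treats a slow replica as failed; --fail catches HTTP errors
  if result=$(curl -s --fail --max-time 2 \
        "http://${host}/collections/companies/documents/search?q=stark&query_by=company_name" \
        -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}"); then
    echo "${result}"
    break
  fi
done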

How to run on a Wordpress site (MySQL)?

Description

Neither the material on GitHub nor the 'detailed guide' on the website quite explains what one needs to do to create the data index from which Typesense queries. Do we have to create multiple JSONs as documented in the example? Is there a WordPress plugin to do the same?
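
One rough approach, absent a plugin, is to export published posts straight from MySQL as JSONL and bulk-import them. A sketch, assuming the standard wp_posts table, jq being installed, a posts collection already created in Typesense, and post content free of embedded tabs/newlines:

$ mysql -u wp_user -p wordpress --batch --raw -e \
    "SELECT ID, post_title, post_content FROM wp_posts WHERE post_status = 'publish'" \
  | tail -n +2 \
  | jq -R -c 'split("\t") | {id: .[0], post_title: .[1], post_content: .[2]}' \
  | curl -X POST "http://localhost:8108/collections/posts/documents/import" \
         -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
         --data-binary @-

# tail -n +2 drops the column-header row; jq turns each tab-separated
# row into one JSON document per line, which the import endpoint expects.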

Steps to reproduce

None. It's a question.

Expected Behavior

Expect Typesense to have some smart mechanism to easily automate the creation of indexes for sections of a typical website or app. Also, it's hard to come by examples of what one needs to do to tweak the CSS etc. to alter the look of the default search.

Actual Behavior

It does none of the above. Even the guide is very basic.

Metadata

Typesense Version: Latest

OS: Linux CentOS on the server.

num_documents & facet is not returned when creating a new collection

Description

When a new collection is created, the response only includes the posted creation schema, and so doesn't have the num_documents and facet fields. But when you retrieve the collection, the fields show up alright.

Ideally, we'd want the output of schema creation and schema retrieval to be identical.

Steps to reproduce

Found this out as I was writing examples with outputs for the Ruby client:

##
# Create a collection

schema = {
  'name'      => 'companies',
  'fields'    => [
    {
      'name' => 'company_name',
      'type' => 'string'
    },
    {
      'name'  => 'num_employees',
      'type'  => 'int32'
    },
    {
      'name'  => 'country',
      'type'  => 'string',
      'facet' => true
    }
  ],
  'default_sorting_field' => 'num_employees'
}

collection = typesense.collections.create(schema)
ap collection

# {
#   "name"                  => "companies",
#   "fields"                => [
#     [0] {
#       "name" => "company_name",
#       "type" => "string"
#     },
#     [1] {
#       "name" => "num_employees",
#       "type" => "int32"
#     },
#     [2] {
#       "name"  => "country",
#       "type"  => "string",
#       "facet" => true
#     }
#   ],
#   "default_sorting_field" => "num_employees"
# }


##
# Retrieve a collection

collection = typesense.collections['companies'].retrieve
ap collection

# {
#   "default_sorting_field" => "num_employees",
#   "fields"                => [
#     [0] {
#       "facet" => false, # Notice the facet field here, but not in the response above
#       "name"  => "company_name",
#       "type"  => "string"
#     },
#     [1] {
#       "facet" => false, # Notice the facet field here, but not in the response above
#       "name"  => "num_employees",
#       "type"  => "int32"
#     },
#     [2] {
#       "facet" => true,
#       "name"  => "country",
#       "type"  => "string"
#     }
#   ],
#   "name"                  => "companies",
#   "num_documents"         => 0 # Notice num_documents here, but not in the response above
# }

Expected Behavior

num_documents and facet are returned in the output of collection creation.

Actual Behavior

num_documents and facet are missing in the output of collection creation.

Metadata

Typesense Version: 0.8.0-rc1

OS: Docker

Return all fields that match in the highlight section of search results

Description

When a query matches multiple fields in a document, the search results currently return only one field in the highlight section. Ideally, we want all matching fields returned in the highlight section.

Steps to reproduce

# Create a users schema:

$ curl -X POST "http://localhost:8108/collections" \
       -H "accept: application/json" \
       -H "X-TYPESENSE-API-KEY: abcd" \
       -H "Content-Type: application/json" \
       -d '{
             "name": "users",
             "fields": [
               {"name": "full_name", "type": "string"},
               {"name": "shipping_address", "type": "string"},
               {"name": "billing_address", "type": "string"},
               {"name": "customer_id", "type": "int32"}
             ],
             "default_sorting_field": "customer_id"
           }'

# Index a user:

$ curl -X POST "http://localhost:8108/collections/users/documents" \
       -H "accept: application/json" \
       -H "X-TYPESENSE-API-KEY: abcd" \
       -H "Content-Type: application/json" \
       -d '{
             "customer_id": 124,
             "full_name": "Tony Stark",
             "billing_address": "10880 Malibu Point, Malibu, CA 90265",
             "shipping_address": "10880 Malibu Point, Malibu, CA 90265"
           }'

# Now search for users with the word "Malibu" in their shipping or billing address:

$ curl -X GET "http://localhost:8108/collections/users/documents/search?q=Malibu&query_by=shipping_address%2Cbilling_address" -H  "accept: application/json" -H  "X-TYPESENSE-API-KEY: abcd"

# Result:
# {
#   "facet_counts": [],
#   "found": 1,
#   "hits": [
#     {
#       "document": {
#         "billing_address": "10880 Malibu Point, Malibu, CA 90265",
#         "customer_id": 124,
#         "full_name": "Tony Stark",
#         "id": "1",
#         "shipping_address": "10880 Malibu Point, Malibu, CA 90265"
#       },
#       "highlight": {
#         "field": "shipping_address",
#         "snippet": "10880 <mark>Malibu</mark> Point, Malibu, CA 90265"
#       }
#     }
#   ],
#   "page": 1,
#   "search_time_ms": 0
# }

Expected Behavior

  • The result should include both shipping address and billing address in the highlight section.
  • The result should also <mark> all occurrences of the word in the field.

Actual Behavior

  • The result only includes shipping address in the highlight section.
  • The highlight only marks the first occurrence of the query.

Metadata

Typesense Version: typesense/typesense:wildcard_query_support

OS/Platform: Docker Image

obtaining match score in results?

Is there any way to obtain the internal match score for each result from the ${TYPESENSE_HOST}/collections/:collection/documents/search endpoint? (e.g. is there a way to get back something like {id: "124", ... , _score: 0.98} for each result?)
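
For reference, later Typesense versions return a text_match relevance score with each hit (it is an assumption that no such field existed when this question was asked); a sketch of the shape:

# Each entry under "hits" carries the document plus a text_match score:
# {
#   "document": { "id": "124", ... },
#   "text_match": 578730123365187705
# }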

Demo not working on typesense.org

Description

The demo search has stopped working

Steps to reproduce

Visit https://typesense.org/ and try the demo.

Expected Behavior

A search result should appear

Actual Behavior

Nothing appears.

Metadata

Tried various browsers. In Chrome console I'm seeing: Failed to load resource: net::ERR_CONNECTION_REFUSED

Proper tagging for Docker images

The Docker Hub images seem to be tagged with commit messages rather than semantic version tags.

An example is the default pull command, which looks for the latest image:
docker pull typesense/typesense

Obviously one could look at the tags and fetch the proper one, but the tags don't seem to follow proper versioning, and having a default latest tag wouldn't hurt.
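
For reference, release images now also carry semantic version tags, so a specific version can be pinned; a minimal example (the exact version number is illustrative):

$ docker pull typesense/typesense:0.25.2

# Pinning a released version avoids pulling feature-branch tags like
# the highlight_all_fields tag used elsewhere in this document.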
