elastic / elasticsearch

Free and Open, Distributed, RESTful Search Engine

Home Page: https://www.elastic.co/products/elasticsearch

License: Other


Elasticsearch

Elasticsearch is a distributed search and analytics engine optimized for speed and relevance on production-scale workloads. It is the foundation of the open Elastic Stack platform. Search in near real-time over massive datasets, perform vector searches, integrate with generative AI applications, and much more.

Use cases enabled by Elasticsearch include full-text search, vector search, log and metric analytics, and security monitoring... and more!

To learn more about Elasticsearch’s features and capabilities, see our product page.

For the latest on machine learning innovations and Lucene contributions from Elastic, see Search Labs.

Get started

The simplest way to set up Elasticsearch is to create a managed deployment with Elasticsearch Service on Elastic Cloud.

If you prefer to install and manage Elasticsearch yourself, you can download the latest version from elastic.co/downloads/elasticsearch.

Run Elasticsearch locally

To try out Elasticsearch on your own machine, we recommend using Docker and running both Elasticsearch and Kibana. Docker images are available from the Elastic Docker registry.

Note
Starting in Elasticsearch 8.0, security is enabled by default. The first time you start Elasticsearch, TLS encryption is configured automatically, a password is generated for the elastic user, and a Kibana enrollment token is created so you can connect Kibana to your secured cluster.

For other installation options, see the Elasticsearch installation documentation.

Start Elasticsearch

  1. Install and start Docker Desktop. Go to Preferences > Resources > Advanced and set Memory to at least 4GB.

  2. Start an Elasticsearch container:

    docker network create elastic
    docker pull docker.elastic.co/elasticsearch/elasticsearch:{version} (1)
    docker run --name elasticsearch --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -t docker.elastic.co/elasticsearch/elasticsearch:{version}
    1. Replace {version} with the version of Elasticsearch you want to run.

      When you start Elasticsearch for the first time, the generated elastic user password and Kibana enrollment token are output to the terminal.

      Note
      You might need to scroll back a bit in the terminal to view the password and enrollment token.
  3. Copy the generated password and enrollment token and save them in a secure location. These values are shown only when you start Elasticsearch for the first time. You’ll use these to enroll Kibana with your Elasticsearch cluster and log in.

Start Kibana

Kibana enables you to easily send requests to Elasticsearch and analyze, visualize, and manage data interactively.

  1. In a new terminal session, start Kibana and connect it to your Elasticsearch container:

    docker pull docker.elastic.co/kibana/kibana:{version} (1)
    docker run --name kibana --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:{version}
    1. Replace {version} with the version of Kibana you want to run.

      When you start Kibana, a unique URL is output to your terminal.

  2. To access Kibana, open the generated URL in your browser.

    1. Paste the enrollment token that you copied when starting Elasticsearch and click the button to connect your Kibana instance with Elasticsearch.

    2. Log in to Kibana as the elastic user with the password that was generated when you started Elasticsearch.

Send requests to Elasticsearch

You send data and other requests to Elasticsearch through REST APIs. You can interact with Elasticsearch using any client that sends HTTP requests, such as the Elasticsearch language clients and curl. Kibana’s developer console provides an easy way to experiment and test requests. To access the console, go to Management > Dev Tools.
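As a sketch of what such an HTTP request looks like when built by hand, the following uses only the Python standard library. The endpoint, index, and query values here are illustrative assumptions rather than values from this README, and a real 8.x cluster would additionally require the generated elastic credentials and TLS settings:

```python
import json
import urllib.request

# Hypothetical local endpoint; security settings from the startup
# output (password, TLS) are omitted from this sketch.
ES_URL = "https://localhost:9200"

def build_search_request(index: str, query: dict) -> urllib.request.Request:
    """Build (but do not send) a search request for the given index."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url=f"{ES_URL}/{index}/_search",
        data=body,
        headers={"Content-Type": "application/json"},
        method="GET",  # Elasticsearch accepts GET or POST with a body here
    )

req = build_search_request("customer", {"match": {"firstname": "Jennifer"}})
print(req.get_method(), req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) then works the same way for every API shown below.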

Add data

You index data into Elasticsearch by sending JSON objects (documents) through the REST APIs. Whether you have structured or unstructured text, numerical data, or geospatial data, Elasticsearch efficiently stores and indexes it in a way that supports fast searches.

For timestamped data such as logs and metrics, you typically add documents to a data stream made up of multiple auto-generated backing indices.

To add a single document to an index, submit an HTTP POST request that targets the index.

POST /customer/_doc/1
{
  "firstname": "Jennifer",
  "lastname": "Walters"
}

This request automatically creates the customer index if it doesn’t exist, adds a new document that has an ID of 1, and stores and indexes the firstname and lastname fields.

The new document is available immediately from any node in the cluster. You can retrieve it with a GET request that specifies its document ID:

GET /customer/_doc/1

To add multiple documents in one request, use the _bulk API. Bulk data must be newline-delimited JSON (NDJSON). Each line must end in a newline character (\n), including the last line.

PUT customer/_bulk
{ "create": { } }
{ "firstname": "Monica","lastname":"Rambeau"}
{ "create": { } }
{ "firstname": "Carol","lastname":"Danvers"}
{ "create": { } }
{ "firstname": "Wanda","lastname":"Maximoff"}
{ "create": { } }
{ "firstname": "Jennifer","lastname":"Takeda"}
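The trailing-newline requirement is easy to get wrong when generating bulk bodies programmatically. A minimal sketch (the helper name is made up for illustration) that frames documents as create actions in NDJSON:

```python
import json

def to_bulk_ndjson(docs):
    """Serialize documents as an NDJSON _bulk body with 'create' actions.

    Each action line is followed by its document line, and every line,
    including the last, ends in a newline, as the _bulk API requires.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"create": {}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

docs = [
    {"firstname": "Monica", "lastname": "Rambeau"},
    {"firstname": "Carol", "lastname": "Danvers"},
]
body = to_bulk_ndjson(docs)
print(body, end="")
```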

Search

Indexed documents are available for search in near real-time. The following search matches all customers with a first name of Jennifer in the customer index.

GET customer/_search
{
  "query" : {
    "match" : { "firstname": "Jennifer" }
  }
}

Explore

You can use Discover in Kibana to interactively search and filter your data. From there, you can start creating visualizations and building and sharing dashboards.

To get started, create a data view that connects to one or more Elasticsearch indices, data streams, or index aliases.

  1. Go to Management > Stack Management > Kibana > Data Views.

  2. Select Create data view.

  3. Enter a name for the data view and a pattern that matches one or more indices, such as customer.

  4. Select Save data view to Kibana.

To start exploring, go to Analytics > Discover.

Upgrade

To upgrade from an earlier version of Elasticsearch, see the Elasticsearch upgrade documentation.

Build from source

Elasticsearch uses Gradle for its build system.

To build a distribution for your local OS and print its output location upon completion, run:

./gradlew localDistro

To build a distribution for another platform, run the related command:

./gradlew :distribution:archives:linux-tar:assemble
./gradlew :distribution:archives:darwin-tar:assemble
./gradlew :distribution:archives:windows-zip:assemble

To build distributions for all supported platforms, run:

./gradlew assemble

Distributions are output to distribution/archives.

To run the test suite, see TESTING.

Documentation

For the complete Elasticsearch documentation, visit elastic.co.

For information about our documentation processes, see the docs README.

Examples and guides

The elasticsearch-labs repo contains executable Python notebooks, sample apps, and resources to test out Elasticsearch for vector search, hybrid search and generative AI use cases.

Contribute

For contribution guidelines, see CONTRIBUTING.

Questions? Problems? Suggestions?

  • To report a bug or request a feature, create a GitHub Issue. Please ensure someone else hasn’t created an issue for the same topic.

  • Need help using Elasticsearch? Reach out on the Elastic Forum or Slack. A fellow community member or Elastic engineer will be happy to help you out.

Contributors

benwtrent, bleskes, cbuescher, clintongormley, colings86, dadoonet, dakrone, davecturner, dimitris-athanasiou, dnhatn, droberts195, imotov, jasontedor, javanna, jaymode, jimczi, jpountz, jrodewig, kimchy, lcawl, mark-vieira, martijnvg, nik9000, original-brownbear, rjernst, rmuir, s1monw, spinscale, tlrx, ywelsch


Issues

JSON object properties are not positional

Hiya

It looks like you are parsing JSON as a stream, where position matters, instead of parsing the whole objects before analysing them. For instance, this works:

curl -XGET 'http://127.0.0.2:9200/_all/_search'  -d '
{ "query" :
    {
       "filteredQuery" : {
          "query" : {
             "term" : {
                "text" : "foo"
             }
          },
          "filter" : {
             "range" : {
                "num" : {
                   "from" : 10,
                   "to" : 20
                }
             }
          }
       }
   }
}
'

But this fails:
curl -XGET 'http://127.0.0.2:9200/_all/_search' -d '
{ "query" :
    {
       "filteredQuery" : {
          "filter" : {
             "range" : {
                "num" : {
                   "to" : 20,
                   "from" : 10
                }
             }
          },
          "query" : {
             "term" : {
                "text" : "foo"
             }
          }
       }
   }
}
'

A JSON parser should consider these two structures to be identical; this is the same property of JSON that rules out non-unique property names.

thanks

Clint

(Edited to correct JSON)
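The equivalence Clint describes can be checked with any conforming JSON parser; for example, in Python:

```python
import json

# The same filtered query with its "query" and "filter" members in
# opposite order; a conforming JSON parser sees identical objects.
a = json.loads('{"filter": {"range": {"num": {"from": 10, "to": 20}}}, '
               '"query": {"term": {"text": "foo"}}}')
b = json.loads('{"query": {"term": {"text": "foo"}}, '
               '"filter": {"range": {"num": {"from": 10, "to": 20}}}}')
print(a == b)
```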

Mapping Overhaul - More user friendly, cluster aware

  1. When adding mapping definitions, they are now merged with the current mapping definitions. Duplicates are silently ignored unless specified otherwise in the put mapping API (HTTP param ignoreDuplicates set to false). Note that duplicates refer only to field mappings; object mappings are recursively checked.
  2. Mapping definitions are now cluster aware. Until now, mappings were not merged (see point 1) but simply broadcast to the cluster, and when an indexed document resulted in an updated mapping, that fact was lost. Now, changed mappings are updated and merged on the master and broadcast to the whole cluster, so all the different nodes learn about newly introduced types almost immediately.

JSON Implementation is not standard

Among other problems, I have noticed the following:

  1. Floating points in quotes, such as "1.0", cause null pointer exceptions when sent as document fields. These should just be treated as regular strings. Here is the error message:
    "error" : "Index[...] Shard[1] ; nested: Failed to parse; nested: Current token (VALUE_STRING) not numeric, can not use numeric value accessors\n at [Source: {..., "f": "1.0",...}; line: 1, column: 32]; "
  2. Integers in status response are placed in strings. This is more an annoying detail, than a bug.

I tested this with JSON generated by the Python standard library json package, which is confirmed to be compatible with in-browser JSON and PHP JSON implementations. From the tracebacks I have seen, the problem is either in the JsonObjectMapper or possibly the flexjson package, but I haven't looked into it in detail.

Let me know if you need some more examples.

Thanks for the great project!
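The first point follows directly from the JSON grammar: a quoted value is a string regardless of its content, as any conforming parser shows:

```python
import json

# Per the JSON grammar, a quoted "1.0" is a string, not a number;
# a standards-compliant consumer must not treat it as numeric.
doc = json.loads('{"f": "1.0"}')
print(type(doc["f"]).__name__)
```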

Discovery: Support local (JVM level) discovery

Allow to have a JVM (well, actually class loader) level discovery for simple testing / embedding of a single node (which, potentially exists with other nodes in the same class loader).

Enable it using:

discovery:
    type: local

Or using:

node:
    local: true

(which will also enable other modules to be local, such as the transport - once we have that...)

Query DSL: Field Query

A new query, the field query, which is very similar to the queryString query except that it works on a single field. It has the same parameters as queryString (except for defaultField). The idea is to make querying against a single field with the query string syntax (fuzzy, boolean, phrase, and so on) straightforward.

Sample 1:

{
    field : { age : 34 }
}

Sample 2:

{
    field : { "name.first" : "+shay -kimchy" }
}

Sample 3:

{
    field : { 
        "name.first" : { 
            query : "+shay -kimchy", 
            boost : 2.0, 
            enablePositionIncrements : false 
        } 
    }
}

Query http listeners

In the same way as you can find out if a node is a data node or not, it'd be good to tell if a node has http enabled or not.

One of the things I'd like to be able to do is to query one node about the other http enabled nodes in the cluster (in the same way as you can find out which nodes are data nodes).

In other words, one of my clients starts up, queries the 'main' node about which listeners are available, then randomly selects one of those nodes.

The idea is to spread the load between the nodes, and also, if a node goes down, then my client already has a list of other nodes that it can try connecting to.

thanks

Clint

Transport: Support local (JVM level) transport

Allow to have a JVM (well, actually class loader) level transport for simple testing / embedding of a single node (which, potentially exists with other nodes in the same class loader).

Enable it using:

transport:
    type: local

Or using:

node:
    local: true

(which will also enable other modules to be local, such as the discovery)

Query DSL: queryString - allow to run against multiple fields

The current queryString query requires defining the defaultField in order to run the query against a default field. Running the same query string against several "default" fields can be done by combining them with bool or disMax queries, but it's cumbersome.

queryString should support a "fields" parameter listing the fields the query will run against. A simple flag should choose whether the per-field queries are combined using disMax or bool, and a tieBreaker field is allowed when using disMax.

Sample 1:

{
    queryString : {
        fields : ["content", "name"],
        useDisMax : false,
        query: "test"
    }
}

Sample 2:

{
    queryString : {
        fields : ["content", "name"],
        useDisMax : true,
        query: "test"
    }
}

Boosting per field should be allowed, for example:

{
    queryString : {
        fields : ["content^1.4", "name"],
        query: "test"
    }
}

MoreLikeThis API: Search documents that are "like" the specified document

URI is: {index}/{type}/{id}/_moreLikeThis
Method: GET and POST

It translates into a search request with an mlt query (using the mlt query DSL). All mlt parameters are extracted from the http parameters.

The body of the request can optionally include the typical search request body (facets, from, size, ...).

Creating a duplicate mapping throws the whole cluster

Hiya

I'm starting 3 nodes, then creating a mapping, then creating a duplicate (with ignoreDuplicates=false).

It is sufficient to throw the whole cluster out. It doesn't seem to recover.

Run this script a few times, and watch the server logs:

#!/bin/bash
curl -XGET 'http://127.0.0.1:9200/_cluster/nodes' 
curl -XDELETE 'http://127.0.0.2:9202/es_test_1/' 
curl -XPUT 'http://127.0.0.2:9202/es_test_1,es_test_2/test/_mapping?ignoreDuplicates=false'  -d '
{
   "properties" : {
      "num" : {
         "type" : "integer"
      },
      "text" : {
         "type" : "string"
      }
   }
}
'
curl -XPUT 'http://127.0.0.2:9202/es_test_1,es_test_2/test/_mapping?ignoreDuplicates=false'  -d '
{
   "properties" : {
      "num" : {
         "type" : "integer"
      },
      "text" : {
         "type" : "string"
      }
   }
}
'
curl -XPUT 'http://127.0.0.2:9202/es_test_1,es_test_2/test_2/_mapping?ignoreDuplicates=false'  -d '
{
   "properties" : {
      "num" : {
         "type" : "integer"
      },
      "text" : {
         "type" : "string"
      }
   }
}
'
curl -XDELETE 'http://127.0.0.2:9202/es_test_1/' 
curl -XDELETE 'http://127.0.0.2:9202/es_test_2/' 
curl -XPUT 'http://127.0.0.2:9202/es_test_1/'  -d '
{}
'
curl -XPUT 'http://127.0.0.2:9202/es_test_2/'  -d '
{}
'
curl -XPUT 'http://127.0.0.2:9202/_all/type_1/_mapping?ignoreDuplicates=false'  -d '
{
   "properties" : {
      "num" : {
         "store" : "yes",
         "type" : "integer"
      },
      "text" : {
         "store" : "yes",
         "type" : "string"
      }
   }
}
'
curl -XPUT 'http://127.0.0.2:9202/_all/type_2/_mapping?ignoreDuplicates=false'  -d '
{
   "properties" : {
      "num" : {
         "store" : "yes",
         "type" : "integer"
      },
      "text" : {
         "store" : "yes",
         "type" : "string"
      }
   }
}
'
curl -XPOST 'http://127.0.0.2:9202/_flush?refresh=true' 

Merge bytebuffer and memory stores into a single memory store option

The bytebuffer store name is really bad. It basically exposes the user to the internals of Java on how a direct memory allocation (outside the JVM heap) is done.

Instead, there should be a single memory store, with the option to choose its "location", which can be either "heap" or "direct", with "direct" being the default.

This does mean that if someone was configured to use the bytebuffer store, things will break and they will need to change to the memory type.

Search API: Set different boost for indices when searching across indices

Have the ability to set different boost values per index when searching across indices. This comes in handy for example, when each twitter user has an index, and his friends count more than the rest of the indices.

The parameter is a url parameter called indicesBoost, and for example: indicesBoost=indexName1:2,indexName2:3.1
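The parameter value is a simple comma-separated list of name:boost pairs; a hypothetical parser (the function name is illustrative, not part of Elasticsearch) might look like:

```python
def parse_indices_boost(param: str) -> dict:
    """Parse an indicesBoost URL parameter like 'indexName1:2,indexName2:3.1'
    into a mapping of index name to boost factor."""
    boosts = {}
    for pair in param.split(","):
        index, _, boost = pair.partition(":")
        boosts[index] = float(boost)
    return boosts

print(parse_indices_boost("indexName1:2,indexName2:3.1"))
```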

Changing field type with create_mapping just hides the error

Easier to give an example than to explain:
- on a new cluster (ie no indices, no mappings)
- index a document with eg { foo: 123 } # sets type of foo to 'int'
- index a doc with { foo : "bar" } # throws an error
- create a mapping and set foo's type to 'string'
- index the doc with { foo: "bar"} # ok
- search for {term: { foo: 123}} # 1 hit
- search for {term: { foo: "bar"}} # no hits

So setting the mapping doesn't change the type of 'foo', it just hides the error message later on.

This seems inconsistent to me - it should either change the type of 'foo' going forward, or throw an error when you try to change the type with create_mapping.

Log file follows:

curl -XPUT http://127.0.0.2:9200/twitter/tweet/1 -d '{
   "foo" : 123
}
'
# {
#    "ok" : true,
#    "_index" : "twitter",
#    "_id" : "1",
#    "_type" : "tweet"
# }


curl -XPUT http://127.0.0.2:9200/twitter/tweet/2 -d '{
   "foo" : "bar"
}
'
# {
#    "debug" : {
#       "at" : {
#          "className" : "java.lang.Thread",
#          "methodName" : "run",
#          "fileName" : "Thread.java",
#          "lineNumber" : 619
#       },
#       "cause" : {
#          "at" : {
#             "className" : "java.lang.Thread",
#             "methodName" : "run",
#             "fileName" : "Thread.java",
#             "lineNumber" : 619
#          },
#          "message" : "Current token (VALUE_STRING) not numeric, c
# >         an not use numeric value accessors\n at [Source: {\n   
#       },
#       "message" : "Index[twitter] Shard[1] "
#    },
#    "error" : "Index[twitter] Shard[1] ; nested: Failed to parse; 
# >   nested: Current token (VALUE_STRING) not numeric, can not use
# }


curl -XPUT http://127.0.0.2:9200/_all/tweet/_mapping -d '{
   "properties" : {
      "foo" : {
         "type" : "string"
      }
   }
}
'
# {
#    "ok" : true
# }


curl -XPUT http://127.0.0.2:9200/twitter/tweet/2 -d '{
   "foo" : "bar"
}
'
# {
#    "ok" : true,
#    "_index" : "twitter",
#    "_id" : "2",
#    "_type" : "tweet"
# }


curl -XGET http://127.0.0.2:9200/twitter/tweet/_search -d '{
   "query" : {
      "term" : {
         "foo" : "123"
      }
   }
}
'
# {
#    "hits" : {
#       "hits" : [],
#       "total" : 0
#    },
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 5,
#       "total" : 5
#    }
# }


curl -XGET http://127.0.0.2:9200/twitter/tweet/_search -d '{
   "query" : {
      "term" : {
         "foo" : "bar"
      }
   }
}
'
# {
#    "hits" : {
#       "hits" : [],
#       "total" : 0
#    },
#    "_shards" : {
#       "failed" : 5,
#       "successful" : 0,
#       "total" : 5
#    }
# }

Terms with filters

For auto-suggest, it would be nice to be able to ask for (eg) all terms with prefix 'sch' that occur in the same document as 'arnold'

Any plans to add geospatial search?

It would be incredibly useful to be able to index documents with latitude/longitude positions and run searches that return results ordered by distance from a specific latitude/longitude point.

Optimize API: Add onlyExpungeDeletes, flush and refresh parameters

  • onlyExpungeDeletes: Performs a lightweight optimization by only expunging pending deletes. Defaults to false.
  • flush: Should a flush be performed after the optimization? Defaults to false.
  • refresh: Should a refresh be performed after the optimization? Defaults to false.

NullPointerExceptions when flushing an index

Hiya

I'm running a test suite against a local server, started as:

./bin/elasticsearch -f

I'm getting NullPointerExceptions if I create an index, flush the indices, then delete the index, without sleeping after the flush. Shouldn't flush/refresh/optimize etc block until the action is complete?

Or is there some way of asking the cluster: are you ready now?

ta

clint
> ./bin/elasticsearch -f
[15:04:00,872][INFO ][server ] [Doe, John] {ElasticSearch/0.5.0/2010-02-18T13:42:47/dev}: Initializing ...
[15:04:02,663][INFO ][server ] [Doe, John] {ElasticSearch/0.5.0/2010-02-18T13:42:47/dev}: Initialized
[15:04:02,663][INFO ][server ] [Doe, John] {ElasticSearch/0.5.0/2010-02-18T13:42:47/dev}: Starting ...
[15:04:02,755][INFO ][transport ] [Doe, John] boundAddress [inet[/0.0.0.0:9300]], publishAddress [inet[/127.0.0.2:9300]]
[15:04:02,771][WARN ][jgroups.UDP ] send buffer of socket java.net.DatagramSocket@1faac07d was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
[15:04:02,771][WARN ][jgroups.UDP ] receive buffer of socket java.net.DatagramSocket@1faac07d was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
[15:04:02,771][WARN ][jgroups.UDP ] send buffer of socket java.net.MulticastSocket@2259a735 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
[15:04:02,771][WARN ][jgroups.UDP ] receive buffer of socket java.net.MulticastSocket@2259a735 was set to 25MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
[15:04:04,814][INFO ][cluster ] [Doe, John] New Master [Doe, John][getafix-2590][data][inet[/127.0.0.2:9300]]
[15:04:04,882][INFO ][discovery ] [Doe, John] elasticsearch/getafix-2590
[15:04:04,901][INFO ][http ] [Doe, John] boundAddress [inet[/0.0.0.0:9200]], publishAddress [inet[/127.0.0.2:9200]]
[15:04:05,140][INFO ][jmx ] [Doe, John] boundAddress [service:jmx:rmi:///jndi/rmi://:9400/jmxrmi], publishAddress [service:jmx:rmi:///jndi/rmi://127.0.0.2:9400/jmxrmi]
[15:04:05,140][INFO ][server ] [Doe, John] {ElasticSearch/0.5.0/2010-02-18T13:42:47/dev}: Started
[15:04:06,516][INFO ][cluster.metadata ] [Doe, John] Creating Index [es_test_2], shards [3]/[1]
[15:04:06,992][INFO ][cluster.metadata ] [Doe, John] Deleting index [es_test_2]
Exception in thread "elasticsearch[Doe, John]clusterService#updateTask-pool-6-thread-1" java.lang.NullPointerException
at org.elasticsearch.cluster.action.shard.ShardStateAction$4.execute(ShardStateAction.java:129)
at org.elasticsearch.cluster.DefaultClusterService$2.run(DefaultClusterService.java:161)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Exception in thread "elasticsearch[Doe, John]clusterService#updateTask-pool-6-thread-2" java.lang.NullPointerException
at org.elasticsearch.cluster.action.shard.ShardStateAction$4.execute(ShardStateAction.java:129)
at org.elasticsearch.cluster.DefaultClusterService$2.run(DefaultClusterService.java:161)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Exception in thread "elasticsearch[Doe, John]clusterService#updateTask-pool-6-thread-3" java.lang.NullPointerException
at org.elasticsearch.cluster.action.shard.ShardStateAction$4.execute(ShardStateAction.java:129)
at org.elasticsearch.cluster.DefaultClusterService$2.run(DefaultClusterService.java:161)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)

Test script:

curl -XGET http://127.0.0.1:9200/_cluster/nodes
# {
#    "clusterName" : "elasticsearch",
#    "nodes" : {
#       "getafix-2590" : {
#          "httpAddress" : "inet[/127.0.0.2:9200]",
#          "dataNode" : true,
#          "transportAddress" : "inet[getafix.traveljury.com/127.0.
# >         0.2:9300]",
#          "name" : "Doe, John"
#       }
#    }
# }


curl -XPUT http://127.0.0.2:9200/es_test_2/ -d '{
   "index" : {
      "numberOfReplicas" : 1,
      "numberOfShards" : 3
   }
}
'
# {
#    "ok" : true
# }


curl -XPOST http://127.0.0.2:9200/_flush
# {
#    "ok" : true,
#    "_shards" : {
#       "failed" : 6,
#       "successful" : 0,
#       "total" : 6
#    }
# }


curl -XDELETE http://127.0.0.2:9200/es_test_2/
# {
#    "ok" : true
# }


curl -XGET http://127.0.0.2:9200/es_test_2/_status
# {
#    "debug" : {
#       "at" : {
#          "className" : "java.lang.Thread",
#          "methodName" : "run",
#          "fileName" : "Thread.java",
#          "lineNumber" : 619
#       },
#       "message" : "Index[es_test_2] missing"
#    },
#    "error" : "Index[es_test_2] missing"
# }

query.sort should be an array, not an object

Because JSON doesn't take order into account, sort should be an array, not an object.

For instance:
sort : {
    postDate : {reverse : true},
    user : { },
    score : { }
}

is the equivalent of:

sort : {
    score : { },
    user : { },
    postDate : {reverse : true}
}

Instead, perhaps this syntax:

sort : [
    "score",
    "user",
    { "postDate": { "reverse": true}} 
]

clint
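Unlike object members, array elements are ordered by definition, so the proposed array form preserves sort priority through any conforming parser:

```python
import json

# The proposed array syntax: element order is significant in JSON arrays,
# so sort priority survives parsing and re-serialization.
sort = json.loads('["score", "user", {"postDate": {"reverse": true}}]')
print(sort[0], sort[1])
```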

Terms results differs between one node and multiple

hiya

When I run 'terms' queries against multiple nodes, I get incorrect results with these shard failures:

  "reason" : "BroadcastShardOperationFailedException[[es_test_2][2] ]; nested: RemoteTransportException[[Thumb, Tom][inet[/127.0.0.2:9302]][indices/terms/shard]]; nested: ArrayIndexOutOfBoundsException[23]; "

Start one server, then run this script. It pauses to allow you to stop the server, then to start 3 nodes, then it shows the diff between the two runs:

#!/bin/bash
curl -XPUT 'http://127.0.0.2:9200/es_test_1/'  -d '
{}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/'  -d '
{}'
curl -XPUT 'http://127.0.0.2:9200/_all/type_1/_mapping?ignoreDuplicates=false'  -d '
{"properties":{"num":{"store":"yes","type":"integer"},"text":{"store":"yes","type":"string"}}}'
curl -XPUT 'http://127.0.0.2:9200/_all/type_2/_mapping?ignoreDuplicates=false'  -d '
{"properties":{"num":{"store":"yes","type":"integer"},"text":{"store":"yes","type":"string"}}}'
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
sleep 2;
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/1'  -d '
{"num":2,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/2'  -d '
{"num":3,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/3'  -d '
{"num":4,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/4'  -d '
{"num":5,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/5'  -d '
{"num":6,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/6'  -d '
{"num":7,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/7'  -d '
{"num":8,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/8'  -d '
{"num":9,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/9'  -d '
{"num":10,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/10'  -d '
{"num":11,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/11'  -d '
{"num":12,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/12'  -d '
{"num":13,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/13'  -d '
{"num":14,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/14'  -d '
{"num":15,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/15'  -d '
{"num":16,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/16'  -d '
{"num":17,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/17'  -d '
{"num":18,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/18'  -d '
{"num":19,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/19'  -d '
{"num":20,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/20'  -d '
{"num":21,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/21'  -d '
{"num":22,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/22'  -d '
{"num":23,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/23'  -d '
{"num":24,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/24'  -d '
{"num":25,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/25'  -d '
{"num":26,"text":"foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/26'  -d '
{"num":27,"text":"foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/27'  -d '
{"num":28,"text":"foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/28'  -d '
{"num":29,"text":"foo baz"}'
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
sleep 2
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/30'  -d '
{
   "text" : "foo"
}
'

curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true'" > log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true' >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/es_test_1/_terms?pretty=true&fields=text&toInclusive=true'" >> log_1
curl -XGET 'http://127.0.0.2:9200/es_test_1/_terms?pretty=true&fields=text&toInclusive=true' >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&minFreq=17'" >> log_1 
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&minFreq=17' >> log_1 

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&maxFreq=16&fields=text&toInclusive=true'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&maxFreq=16&fields=text&toInclusive=true'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&size=2&fields=text&toInclusive=true'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&size=2&fields=text&toInclusive=true'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&sort=freq&fields=text&toInclusive=true'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&sort=freq&fields=text&toInclusive=true'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=true'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=true'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz&fromInclusive=false'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz&fromInclusive=false'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=false'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=false'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&prefix=ba'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&prefix=ba'  >> log_1

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&regexp=foo|baz'"  >> log_1
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&regexp=foo|baz'  >> log_1
  #########################################################################


echo "

Now kill the current server, and start 3 nodes, then press Enter

"

read

curl -XPUT 'http://127.0.0.2:9200/es_test_1/'  -d '
{}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/'  -d '
{}'
curl -XPUT 'http://127.0.0.2:9200/_all/type_1/_mapping?ignoreDuplicates=false'  -d '
{"properties":{"num":{"store":"yes","type":"integer"},"text":{"store":"yes","type":"string"}}}'
curl -XPUT 'http://127.0.0.2:9200/_all/type_2/_mapping?ignoreDuplicates=false'  -d '
{"properties":{"num":{"store":"yes","type":"integer"},"text":{"store":"yes","type":"string"}}}'
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
sleep 2;
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/1'  -d '
{"num":2,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/2'  -d '
{"num":3,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/3'  -d '
{"num":4,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/4'  -d '
{"num":5,"text":"foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/5'  -d '
{"num":6,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/6'  -d '
{"num":7,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/7'  -d '
{"num":8,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/8'  -d '
{"num":9,"text":"foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/9'  -d '
{"num":10,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/10'  -d '
{"num":11,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/11'  -d '
{"num":12,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/12'  -d '
{"num":13,"text":"foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/13'  -d '
{"num":14,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/14'  -d '
{"num":15,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/15'  -d '
{"num":16,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/16'  -d '
{"num":17,"text":"bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/17'  -d '
{"num":18,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/18'  -d '
{"num":19,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/19'  -d '
{"num":20,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/20'  -d '
{"num":21,"text":"baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/21'  -d '
{"num":22,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/22'  -d '
{"num":23,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/23'  -d '
{"num":24,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/24'  -d '
{"num":25,"text":"bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/25'  -d '
{"num":26,"text":"foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_2/26'  -d '
{"num":27,"text":"foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/27'  -d '
{"num":28,"text":"foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/28'  -d '
{"num":29,"text":"foo baz"}'
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
sleep 2
curl -XPUT 'http://127.0.0.2:9200/es_test_1/type_1/30'  -d '
{
   "text" : "foo"
}
'

curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true'" > log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true' >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/es_test_1/_terms?pretty=true&fields=text&toInclusive=true'" >> log_2
curl -XGET 'http://127.0.0.2:9200/es_test_1/_terms?pretty=true&fields=text&toInclusive=true' >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&minFreq=17'" >> log_2 
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&minFreq=17' >> log_2 

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&maxFreq=16&fields=text&toInclusive=true'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&maxFreq=16&fields=text&toInclusive=true'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&size=2&fields=text&toInclusive=true'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&size=2&fields=text&toInclusive=true'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&sort=freq&fields=text&toInclusive=true'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&sort=freq&fields=text&toInclusive=true'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=true'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=true'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz&fromInclusive=false'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&from=baz&fromInclusive=false'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=false'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&to=baz&fields=text&toInclusive=false'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&prefix=ba'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&prefix=ba'  >> log_2

echo "
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&regexp=foo|baz'"  >> log_2
curl -XGET 'http://127.0.0.2:9200/_terms?pretty=true&fields=text&toInclusive=true&regexp=foo|baz'  >> log_2

echo "




Showing diff:
"

diff -y --left-column log_1 log_2

Accept 1 / 0 as true/false

Hiya

When you have boolean parameters, eg { "explain": true }, please could you accept any truthy value (eg 1)? In dynamic languages, the user has to go to great lengths to force a boolean true in JSON

thanks

clint

Terms API: Allow to get terms for one or more field

Getting terms (from one or more indices) and their document frequency (the number of times those terms appear in documents) is very handy, for example for implementing tag clouds or providing a basic auto-suggest search box.

There should be several options for this API, including sorting by term (lexicographically) or doc freq, bounding the size, from/to (inclusive or not), min/max freq, and prefix and regexp filtering.

The REST API should be: /{index}/_terms
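The proposed options can be sketched as a filter pipeline over a term → doc-freq table. The following is a minimal Python illustration (parameter names mirror the proposal above; the in-memory model and function are hypothetical, not the actual implementation):

```python
import re
from collections import Counter

def terms(docs, min_freq=1, max_freq=None, prefix=None, regexp=None,
          from_term=None, to_term=None, from_inclusive=True,
          to_inclusive=True, size=None, sort="term"):
    """Sketch of the proposed _terms options over in-memory documents.

    docs: list of strings; doc freq = number of documents containing a term.
    Returns (term, doc_freq) pairs after applying all filters.
    """
    freq = Counter()
    for doc in docs:
        # count each term once per document (document frequency)
        for term in set(doc.split()):
            freq[term] += 1
    items = []
    for term, f in freq.items():
        if f < min_freq or (max_freq is not None and f > max_freq):
            continue
        if prefix is not None and not term.startswith(prefix):
            continue
        if regexp is not None and not re.fullmatch(regexp, term):
            continue
        if from_term is not None and (
                term < from_term or (term == from_term and not from_inclusive)):
            continue
        if to_term is not None and (
                term > to_term or (term == to_term and not to_inclusive)):
            continue
        items.append((term, f))
    # sort lexicographically by term, or by descending doc freq
    items.sort(key=(lambda t: t[0]) if sort == "term" else (lambda t: -t[1]))
    return items[:size] if size is not None else items
```

For instance, `terms(["foo bar", "foo", "bar baz"], prefix="ba")` yields the `ba`-prefixed terms with their document frequencies, sorted by term.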

Exception in clusters with embedded and standalone nodes

When I run a cluster made up of embedded and standalone nodes, the former seem unable to connect to the latter, and I get the following log messages on the embedded nodes:

OOB-1,elasticsearch,caffeine.local-16363 -
jgroups.pbcast.NAKACK - caffeine.local-16363: dropped message from
caffeine.local-17690 (not in xmit_table), keys are
[caffeine.local-16363], view=[caffeine.local-16363|0]
[caffeine.local-16363]

The problem seems to be related to JGroups which, on IPv6-enabled machines, prefers IPv6 addresses over IPv4 ones, but embedded instances aren't able to guess a proper IPv6 address.

That can be fixed by either:

  • specifying java.net.preferIPv4Stack=true on the embedded node, forcing the use of IPv4; or
  • specifying java.net.preferIPv4Stack=false and java.net.preferIPv6Stack=false on the standalone node, apparently forcing JGroups to take the proper default.

flush_index returns both success and failure

while running the default server, just started with

./bin/elasticsearch -f

I create an index, then flush it, and it returns:

'{
   "ok" : true,
   "_shards" : {
      "failed" : 5,
      "successful" : 5,
      "total" : 10
   }
}
'

Why do 5 fail?

Query DSL: Terms Filter

Support a terms filter which allows configuring more than one term for a specific field. For example:

{
    filteredQuery : {
        query : {
            term : { "name.first" : "shay" }
        },
        filter : {
            terms : {
                "name.last" : ["banon", "kimchy"]
            }
        }
    }
}

Boolean Type: Support also cases when a number/string value are passed

Even though there is an explicit boolean type in JSON, also support cases where a number or string is passed: 0 means false, any other number means true; the string "false" means false, any other string means true.

Note, the boolean type will have to be explicitly defined; otherwise a number will be mapped as a number, and a string as a string.
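The coercion rules above can be sketched as follows (a hypothetical helper illustrating the rules, not the actual parser code):

```python
def coerce_boolean(value):
    """Coerce a JSON value into a boolean using the proposed rules:
    numbers: 0 is false, anything else is true; strings: "false" is
    false, anything else is true; booleans pass through unchanged."""
    if isinstance(value, bool):
        # check bool before int, since bool is an int subclass in Python
        return value
    if isinstance(value, (int, float)):
        return value != 0
    if isinstance(value, str):
        return value != "false"
    raise TypeError(f"cannot coerce {type(value).__name__} to boolean")
```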

Any plans to support faceting?

One of the most useful features of Solr is faceted searching - is that likely to be added to elasticsearch at any point?

Sorting on a field explicitly mapped as an integer fails when not all types mapped as well

  • create two indices 'es_test' and 'es_test_2'
  • create a mapping for 'type_1' which includes { num: { type: "integer" } }
  • store documents as /es_test|es_test_2/type_1|type_2 with integer values in num
  • search on all indices and types, sorting by num
  • server hangs

Test script:

curl -XGET 'http://127.0.0.1:9200/_cluster/nodes' 
curl -XDELETE 'http://127.0.0.2:9200/es_test/' 
curl -XDELETE 'http://127.0.0.2:9200/es_test_2/' 
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
curl -XPUT 'http://127.0.0.2:9200/es_test/' 
curl -XPUT 'http://127.0.0.2:9200/es_test_2/'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/_mapping'  -d '
{
   "properties" : {
      "num" : {
         "type" : "integer"
      },
      "text" : {
         "type" : "string"
      }
   }
}
'
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/1?opType=create'  -d '{
   "num" : 2,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/2?opType=create'  -d '{
   "num" : 3,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/3?opType=create'  -d '{
   "num" : 4,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/4?opType=create'  -d '{
   "num" : 5,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/5?opType=create'  -d '{
   "num" : 6,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/6?opType=create'  -d '{
   "num" : 7,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/7?opType=create'  -d '{
   "num" : 8,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/8?opType=create'  -d '{
   "num" : 9,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/9?opType=create'  -d '{
   "num" : 10,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/10?opType=create'  -d '{
   "num" : 11,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/11?opType=create'  -d '{
   "num" : 12,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/12?opType=create'  -d '{
   "num" : 13,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/13?opType=create'  -d '{
   "num" : 14,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/14?opType=create'  -d '{
   "num" : 15,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/15?opType=create'  -d '{
   "num" : 16,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/16?opType=create'  -d '{
   "num" : 17,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/17?opType=create'  -d '{
   "num" : 18,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/18?opType=create'  -d '{
   "num" : 19,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/19?opType=create'  -d '{
   "num" : 20,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/20?opType=create'  -d '{
   "num" : 21,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/21?opType=create'  -d '{
   "num" : 22,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/22?opType=create'  -d '{
   "num" : 23,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/23?opType=create'  -d '{
   "num" : 24,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/24?opType=create'  -d '{
   "num" : 25,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/25?opType=create'  -d '{
   "num" : 26,   "text" : "foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/26?opType=create'  -d '{
   "num" : 27,   "text" : "foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/27?opType=create'  -d '{
   "num" : 28,   "text" : "foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/28?opType=create'  -d '{
   "num" : 29,   "text" : "foo baz"}'


curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 


curl -XGET 'http://127.0.0.2:9200/_all/_search'  -d '
{
   "sort" : {
      "num" : {}
   },
   "query" : {
      "matchAll" : {}
   }
}
'

Only storing one mapping

Hiya

Something weird is going on with put_mapping, eg:

  • index /foo/one/ -d { "xxx": "text"}
  • mappings for index 'foo' shows the correct mapping for 'one'

  • index /foo/two -d {"yyy": "text"}
  • mappings for index 'foo' shows the mapping for 'one' but not 'two'

or

  • index /foo/one/ -d { "xxx": "text"}
  • mappings for index 'foo' shows the correct mapping for 'one'

  • put_mapping /foo/two -d { properties: {"yyy": { type: "string"}}}
  • mappings for index 'foo' shows the mapping for 'one' but not 'two'

or

  • put_mapping /foo/one -d { properties: {"xxx": { type: "string"}}}
  • mappings for index 'foo' shows the correct mapping for 'one'

  • put_mapping /foo/two -d { properties: {"yyy": { type: "string"}}}
  • mappings for index 'foo' shows the mapping for 'two' but not for 'one'

Example below:

curl -XPOST 'http://127.0.0.2:9200/foo/one'  -d '
{
   "xxx" : "text"
}
'
# {
#    "ok" : true,
#    "_index" : "foo",
#    "_id" : "340e1d15-3dfd-4a8e-95ec-1c29fb8e9182",
#    "_type" : "one"
# }

Server log:
-----------
    Index [foo]: Update mapping [one] (dynamic) with source [{
      "one" : {
        "type" : "object",
        "dynamic" : true,
        "enabled" : true,
        "pathType" : "full",
        "dateFormats" : [ "dateOptionalTime" ],
        "boostField" : {
          "name" : "_boost"
        },
        "properties" : {
          "xxx" : {
            "type" : "string",
            "indexName" : "xxx",
            "index" : "analyzed",
            "store" : "no",
            "termVector" : "no",
            "boost" : 1.0,
            "omitNorms" : false,
            "omitTermFreqAndPositions" : false
          }
        }
      }
    }]


Cluster-state: indices.foo.mappings.mapping.value:
--------------------------------------------------
    '{
      "one" : {
        "type" : "object",
        "dynamic" : true,
        "enabled" : true,
        "pathType" : "full",
        "dateFormats" : [ "dateOptionalTime" ],
        "boostField" : {
          "name" : "_boost"
        },
        "properties" : {
          "xxx" : {
            "type" : "string",
            "indexName" : "xxx",
            "index" : "analyzed",
            "store" : "no",
            "termVector" : "no",
            "boost" : 1.0,
            "omitNorms" : false,
            "omitTermFreqAndPositions" : false
          }
        }
      }
    }'

curl -XPOST 'http://127.0.0.2:9200/foo/two'  -d '
{
   "yyy" : "text"
}
'
# {
#    "ok" : true,
#    "_index" : "foo",
#    "_id" : "c57b2421-943e-4623-be80-8168211fca5d",
#    "_type" : "two"
# }

Server log:
-----------
    Update mapping [two] (dynamic) with source [{
      "two" : {
        "type" : "object",
        "dynamic" : true,
        "enabled" : true,
        "pathType" : "full",
        "dateFormats" : [ "dateOptionalTime" ],
        "boostField" : {
          "name" : "_boost"
        },
        "properties" : {
          "yyy" : {
            "type" : "string",
            "indexName" : "yyy",
            "index" : "analyzed",
            "store" : "no",
            "termVector" : "no",
            "boost" : 1.0,
            "omitNorms" : false,
            "omitTermFreqAndPositions" : false
          }
        }
      }
    }]

Cluster-state: indices.foo.mappings.mapping.value:
--------------------------------------------------
    '{
      "one" : {
        "type" : "object",
        "dynamic" : true,
        "enabled" : true,
        "pathType" : "full",
        "dateFormats" : [ "dateOptionalTime" ],
        "boostField" : {
          "name" : "_boost"
        },
        "properties" : {
          "xxx" : {
            "type" : "string",
            "indexName" : "xxx",
            "index" : "analyzed",
            "store" : "no",
            "termVector" : "no",
            "boost" : 1.0,
            "omitNorms" : false,
            "omitTermFreqAndPositions" : false
          }
        }
      }
    }'

Query DSL: moreLikeThis & moreLikeThisField

Add support for moreLikeThis (MLT) queries. More like this finds documents that "look" like the provided text when matched against one or more fields.

The moreLikeThis query looks as follows:

{
    moreLikeThis : {
        fields : ["name.first", "name.last"],
        likeText : "something",
        minTermFrequency : 1,
        maxQueryTerms : 12
    }
}

The moreLikeThisField looks as follows:

{
    moreLikeThisField : {
        "name.first" : {
            likeText : "something",
            minTermFrequency : 1,
            maxQueryTerms : 12
        }
    }
}

There are many more supported parameters; they will be documented in the ref docs for the next release. They basically follow the MLT support in Lucene.

The difference between moreLikeThisField and moreLikeThis is that in moreLikeThisField only a single field can be provided, and it supports "typed" fields (it will automatically filter based on the type).
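The heart of MLT is selecting which terms from the input text become query terms. A simplified sketch of that selection, using the two parameters shown above (term extraction here is plain whitespace tokenization, a stand-in for real analysis):

```python
from collections import Counter

def mlt_query_terms(like_text, min_term_frequency=1, max_query_terms=12):
    """Pick query terms from the input text: drop terms occurring fewer
    than min_term_frequency times, then keep at most max_query_terms of
    the most frequent ones (simplified from Lucene's MoreLikeThis)."""
    freq = Counter(like_text.lower().split())
    candidates = [(t, f) for t, f in freq.items() if f >= min_term_frequency]
    candidates.sort(key=lambda tf: -tf[1])  # most frequent first
    return [t for t, _ in candidates[:max_query_terms]]
```

The selected terms would then be combined into a disjunction query against the configured field(s).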

Mapping not working

Hiya

Following the examples in your docs, create-mapping does not seem to work, eg:

curl -XPUT http://localhost:9200/twitter/tweet -d '
{
    tweet : {
        properties : {
            message : {type : "string", store : "yes"}
        }
    }
}
'

No handler found for uri [/twitter/tweet] and method [PUT]

I tried creating the index first, but same thing.

Also, the JSON format for specifying the mapping type to use when indexing a document is ambiguous, eg:

curl -XPUT http://localhost:9200/twitter/tweet/1 -d \
'
{
     tweet : {
        user : "kimchy",
        postDate : "2009-11-15T14:12:12",
        message : "trying out Elastic Search"
    }
  }
  '

Does that mean that the document has mapping type 'tweet', or that there is no mapping type specified and it has a single top-level key called 'tweet'?

And one thing I'm not sure about: is a mapping the same thing as a type? So you would never have type 'foo' and mapping 'bar'?

thanks

Clint

Query DSL: Bool query/filter to be valid JSON

Currently, the bool query is not valid JavaScript (still valid JSON though...) since indicating two must clauses uses the same field name for a JSON object. The old way should still be supported, but we should also allow for something like this:

{
    bool : {
        must : [
            {
                queryString : {
                    defaultField : "content",
                    query : "test1"
                }
            },
            {
                queryString : {
                    defaultField : "content",
                    query : "test4"
                }
            }
        ],
        mustNot: {
            queryString : {
                defaultField : "content",
                query : "test2"
            }
        },
        should: {
            queryString : {
                defaultField : "content",
                query : "test3"
            }
        }
    }
}
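On the consuming side, supporting both the old (single object) and new (array) forms amounts to normalizing each clause into a list. A minimal sketch (hypothetical helper, not the actual parser):

```python
def normalize_clauses(clause):
    """Accept either a single clause object (the old form) or a list of
    clauses (the new array form) and return a list, so must/mustNot/should
    can be handled uniformly."""
    if clause is None:
        return []          # clause kind absent from the bool query
    if isinstance(clause, list):
        return clause      # already the array form
    return [clause]        # wrap the single-object form
```

With this, a parser can iterate over `normalize_clauses(bool_query.get("must"))` regardless of which style the client sent.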

New nodes not joining the cluster properly

I start one node, insert various documents, run some queries - I get the correct results.

I start a new node, and wait for it to settle (even running optimize/flush/refresh).

When rerunning the same queries, I get different totals and fewer hits returned, eg instead of the default 10 I may get 4 or 5.

Killing the other nodes and rerunning the queries returns the correct results.

ta

clint

Facet query crashes the cluster

Hiya

In my test script, I create two indices, then add 28 documents, then try searching on those.

When I get to the final facets query, the cluster never responds, and then remains unresponsive to all further queries. One of the nodes balloons to 631MB of resident memory - I presume this is some sort of max allowed by java.

Then when closing down the nodes, two close down fine, and the third throws exceptions like the ones listed below.

Test script: (see bottom for facets query)

curl -XGET 'http://127.0.0.1:9200/_cluster/nodes' 
# {
#    "clusterName" : "elasticsearch",
#    "nodes" : {
#       "getafix-10912" : {
#          "httpAddress" : "inet[/127.0.0.2:9200]",
#          "dataNode" : true,
#          "transportAddress" : "inet[getafix.traveljury.com/127.0.
# >          0.2:9300]",
#          "name" : "Ameridroid"
#       },
#       "getafix-62342" : {
#          "httpAddress" : "inet[/127.0.0.2:9201]",
#          "dataNode" : true,
#          "transportAddress" : "inet[getafix.traveljury.com/127.0.
# >          0.2:9302]",
#          "name" : "Super Sabre"
#       },
#       "getafix-27084" : {
#          "httpAddress" : "inet[/127.0.0.2:9202]",
#          "dataNode" : true,
#          "transportAddress" : "inet[getafix.traveljury.com/127.0.
# >          0.2:9301]",
#          "name" : "Shadow Slasher"
#       }
#    }
# }

curl -XPUT 'http://127.0.0.2:9201/es_test/'  -d '
{}
'
# {
#    "ok" : true
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/'  -d '
{}
'
# {
#    "ok" : true
# }


curl -XPOST 'http://127.0.0.2:9201/_flush?refresh=true' 
# {
#    "ok" : true,
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 20,
#       "total" : 20
#    }
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_1/1?opType=create'  -d '
{
   "num" : 2,
   "text" : "foo"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "1",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_2/2?opType=create'  -d '
{
   "num" : 3,
   "text" : "foo"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "2",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_1/3?opType=create'  -d '
{
   "num" : 4,
   "text" : "foo"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "3",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_2/4?opType=create'  -d '
{
   "num" : 5,
   "text" : "foo"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "4",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_1/5?opType=create'  -d '
{
   "num" : 6,
   "text" : "foo bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "5",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_2/6?opType=create'  -d '
{
   "num" : 7,
   "text" : "foo bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "6",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_1/7?opType=create'  -d '
{
   "num" : 8,
   "text" : "foo bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "7",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_2/8?opType=create'  -d '
{
   "num" : 9,
   "text" : "foo bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "8",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_1/9?opType=create'  -d '
{
   "num" : 10,
   "text" : "foo bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "9",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_2/10?opType=create'  -d '
{
   "num" : 11,
   "text" : "foo bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "10",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_1/11?opType=create'  -d '
{
   "num" : 12,
   "text" : "foo bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "11",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_2/12?opType=create'  -d '
{
   "num" : 13,
   "text" : "foo bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "12",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_1/13?opType=create'  -d '
{
   "num" : 14,
   "text" : "bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "13",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_2/14?opType=create'  -d '
{
   "num" : 15,
   "text" : "bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "14",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_1/15?opType=create'  -d '
{
   "num" : 16,
   "text" : "bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "15",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_2/16?opType=create'  -d '
{
   "num" : 17,
   "text" : "bar baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "16",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_1/17?opType=create'  -d '
{
   "num" : 18,
   "text" : "baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "17",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_2/18?opType=create'  -d '
{
   "num" : 19,
   "text" : "baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "18",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_1/19?opType=create'  -d '
{
   "num" : 20,
   "text" : "baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "19",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_2/20?opType=create'  -d '
{
   "num" : 21,
   "text" : "baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "20",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_1/21?opType=create'  -d '
{
   "num" : 22,
   "text" : "bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "21",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_2/22?opType=create'  -d '
{
   "num" : 23,
   "text" : "bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "22",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_1/23?opType=create'  -d '
{
   "num" : 24,
   "text" : "bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "23",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_2/24?opType=create'  -d '
{
   "num" : 25,
   "text" : "bar"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "24",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_1/25?opType=create'  -d '
{
   "num" : 26,
   "text" : "foo baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "25",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test/type_2/26?opType=create'  -d '
{
   "num" : 27,
   "text" : "foo baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test",
#    "_id" : "26",
#    "_type" : "type_2"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_1/27?opType=create'  -d '
{
   "num" : 28,
   "text" : "foo baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "27",
#    "_type" : "type_1"
# }


curl -XPUT 'http://127.0.0.2:9201/es_test_2/type_2/28?opType=create'  -d '
{
   "num" : 29,
   "text" : "foo baz"
}
'
# {
#    "ok" : true,
#    "_index" : "es_test_2",
#    "_id" : "28",
#    "_type" : "type_2"
# }


curl -XPOST 'http://127.0.0.2:9201/_flush?refresh=true' 
# {
#    "ok" : true,
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 20,
#       "total" : 20
#    }
# }


curl -XGET 'http://127.0.0.2:9201/_all/_search'  -d '
{
   "query" : {
      "matchAll" : {}
   }
}
'
# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "num" : 4,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "3",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 5,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "4",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 9,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "8",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 21,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "20",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 3,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "2",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 7,
#                "text" : "foo bar"
#             },
#             "_index" : "es_test",
#             "_id" : "6",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 2,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "1",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 11,
#                "text" : "foo bar baz"
#             },
#             "_index" : "es_test",
#             "_id" : "10",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 22,
#                "text" : "bar"
#             },
#             "_index" : "es_test",
#             "_id" : "21",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 10,
#                "text" : "foo bar baz"
#             },
#             "_index" : "es_test",
#             "_id" : "9",
#             "_type" : "type_1"
#          }
#       ],
#       "total" : 28
#    },
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 10,
#       "total" : 10
#    }
# }


curl -XGET 'http://127.0.0.2:9201/_all/_search'  -d '
{
   "query" : {
      "matchAll" : {}
   },
   "size" : 100
}
'
# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "num" : 5,
#                "text" : "foo"
#             },
#             "_index" : "es_test_2",
#             "_id" : "4",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 7,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "6",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 3,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "2",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 11,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "10",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 4,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "3",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 9,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "8",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 21,
#                "text" : "baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "20",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 2,
#                "text" : "foo"
#             },
#             "_index" : "es_test",
#             "_id" : "1",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 6,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "5",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 8,
#                "text" : "foo bar"
#             },
#             "_index" : "es_test_2",
#             "_id" : "7",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 12,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "11",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 10,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "9",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 20,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "19",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 22,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "21",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 16,
#                "text" : "bar baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "15",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 23,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "22",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 14,
#                "text" : "bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "13",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 13,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "12",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 18,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "17",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 24,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "23",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 17,
#                "text" : "bar baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "16",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 15,
#                "text" : "bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "14",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 19,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "18",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 25,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "24",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 26,
#                "text" : "foo baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "25",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 28,
#                "text" : "foo baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "27",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 27,
#                "text" : "foo baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "26",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 29,
#                "text" : "foo baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "28",
#             "_type" : "type_2"
#          }
#       ],
#       "total" : 28
#    },
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 10,
#       "total" : 10
#    }
# }


curl -XGET 'http://127.0.0.2:9201/_all/_search?searchType=query_then_fetch'  -d '
{
   "query" : {
      "matchAll" : {}
   }
}
'
# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "num" : 4,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "3",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 5,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "4",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 9,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "8",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 21,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "20",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 3,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "2",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 7,
#                "text" : "foo bar"
#             },
#             "_index" : "es_test",
#             "_id" : "6",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 2,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "1",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 11,
#                "text" : "foo bar baz"
#             },
#             "_index" : "es_test",
#             "_id" : "10",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 22,
#                "text" : "bar"
#             },
#             "_index" : "es_test",
#             "_id" : "21",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 10,
#                "text" : "foo bar baz"
#             },
#             "_index" : "es_test",
#             "_id" : "9",
#             "_type" : "type_1"
#          }
#       ],
#       "total" : 28
#    },
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 10,
#       "total" : 10
#    }
# }


curl -XGET 'http://127.0.0.2:9201/_all/_search?searchType=query_and_fetch'  -d '
{
   "query" : {
      "matchAll" : {}
   }
}
'
# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "num" : 5,
#                "text" : "foo"
#             },
#             "_index" : "es_test_2",
#             "_id" : "4",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 7,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "6",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 3,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "2",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 11,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "10",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 4,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "3",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 9,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "8",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 21,
#                "text" : "baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "20",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 2,
#                "text" : "foo"
#             },
#             "_index" : "es_test",
#             "_id" : "1",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 6,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "5",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 8,
#                "text" : "foo bar"
#             },
#             "_index" : "es_test_2",
#             "_id" : "7",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 12,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "11",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 10,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "9",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 20,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "19",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 22,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "21",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 16,
#                "text" : "bar baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "15",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 23,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "22",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 14,
#                "text" : "bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "13",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 13,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "12",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 18,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "17",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 24,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "23",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 17,
#                "text" : "bar baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "16",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 15,
#                "text" : "bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "14",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 19,
#                "text" : "baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "18",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 25,
#                "text" : "bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "24",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 26,
#                "text" : "foo baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "25",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 28,
#                "text" : "foo baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "27",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 27,
#                "text" : "foo baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "26",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 29,
#                "text" : "foo baz"
#             },
#             "_index" : "es_test_2",
#             "_id" : "28",
#             "_type" : "type_2"
#          }
#       ],
#       "total" : 28
#    },
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 10,
#       "total" : 10
#    }
# }


curl -XGET 'http://127.0.0.2:9201/_all/_search'  -d '
{
   "query" : {
      "term" : {
         "text" : "foo"
      }
   }
}
'
# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "num" : 3,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "2",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 5,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "4",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 4,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "3",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 6,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "5",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 9,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "8",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 7,
#                "text" : "foo bar"
#             },
#             "_index" : "es_test",
#             "_id" : "6",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 8,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "7",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 26,
#                "text" : "foo baz"
#             },
#             "_index" : "es_test",
#             "_id" : "25",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 28,
#                "text" : "foo baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "27",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 27,
#                "text" : "foo baz"
#             },
#             "_index" : "es_test",
#             "_id" : "26",
#             "_type" : "type_2"
#          }
#       ],
#       "total" : 16
#    },
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 10,
#       "total" : 10
#    }
# }


curl -XGET 'http://127.0.0.2:9201/_all/_search'  -d '
{
   "query" : {
      "queryString" : {
         "defaultField" : "text",
         "query" : "foo OR bar"
      }
   }
}
'
# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "num" : 6,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "5",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 8,
#                "text" : "foo bar"
#             },
#             "_index" : "es_test_2",
#             "_id" : "7",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 7,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "6",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 9,
#                "text" : "foo bar"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "8",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 10,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "9",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 11,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "10",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 12,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "11",
#             "_type" : "type_1"
#          },
#          {
#             "_source" : {
#                "num" : 13,
#                "text" : "foo bar baz"
#             },
#             "fields" : {},
#             "_index" : "es_test_2",
#             "_id" : "12",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 3,
#                "text" : "foo"
#             },
#             "fields" : {},
#             "_index" : "es_test",
#             "_id" : "2",
#             "_type" : "type_2"
#          },
#          {
#             "_source" : {
#                "num" : 5,
#                "text" : "foo"
#             },
#             "_index" : "es_test_2",
#             "_id" : "4",
#             "_type" : "type_2"
#          }
#       ],
#       "total" : 24
#    },
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 10,
#       "total" : 10
#    }
# }


curl -XGET 'http://127.0.0.2:9201/_all/_search'  -d '
{
   "query" : {
      "queryString" : {
         "defaultField" : "text",
         "query" : "foo OR bar"
      }
   },
   "facets" : {
      "barFacet" : {
         "query" : {
            "term" : {
               "text" : "bar"
            }
         }
      },
      "bazFacet" : {
         "query" : {
            "term" : {
               "text" : "baz"
            }
         }
      }
   }
}
'
# 500 Server closed connection without sending any data back

Node errors (from the server log):

[18:51:56,987][INFO ][server                   ] [Super Sabre] {ElasticSearch/0.5.0/2010-02-19T12:32:15/dev}: Closing ...
[18:52:06,995][INFO ][server                   ] [Super Sabre] {ElasticSearch/0.5.0/2010-02-19T12:32:15/dev}: Closed
[18:52:06,998][WARN ][indices.cluster          ] [Super Sabre] Failed to start shard for index [es_test_2] and shard id [0]
org.elasticsearch.index.shard.recovery.RecoveryFailedException: Index Shard [es_test_2][0]: Recovery failed from [Shadow Slasher][getafix-27084][data][inet[getafix.traveljury.com/127.0.0.2:9301]] into [Super Sabre][getafix-62342][data][inet[getafix.traveljury.com/127.0.0.2:9302]]
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:154)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService$3.run(IndicesClusterStateService.java:325)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.ElasticSearchInterruptedException
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:97)
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:34)
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:124)
    ... 4 more
[18:52:06,999][WARN ][indices.cluster          ] [Super Sabre] Failed to start shard for index [es_test] and shard id [2]
org.elasticsearch.index.shard.recovery.RecoveryFailedException: Index Shard [es_test][2]: Recovery failed from [Shadow Slasher][getafix-27084][data][inet[getafix.traveljury.com/127.0.0.2:9301]] into [Super Sabre][getafix-62342][data][inet[getafix.traveljury.com/127.0.0.2:9302]]
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:154)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService$3.run(IndicesClusterStateService.java:325)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.ElasticSearchInterruptedException
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:97)
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:34)
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:124)
    ... 4 more
[18:52:06,998][WARN ][indices.cluster          ] [Super Sabre] Failed to start shard for index [es_test_2] and shard id [3]
org.elasticsearch.index.shard.recovery.RecoveryFailedException: Index Shard [es_test_2][3]: Recovery failed from [Shadow Slasher][getafix-27084][data][inet[getafix.traveljury.com/127.0.0.2:9301]] into [Super Sabre][getafix-62342][data][inet[getafix.traveljury.com/127.0.0.2:9302]]
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:154)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService$3.run(IndicesClusterStateService.java:325)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.ElasticSearchInterruptedException
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:97)
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:34)
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:124)
    ... 4 more
[18:52:07,003][WARN ][cluster.action.shard     ] [Super Sabre] Sending failed shard for [es_test_2][3], Node[getafix-62342], [B], S[INITIALIZING]
[18:52:07,001][WARN ][cluster.action.shard     ] [Super Sabre] Sending failed shard for [es_test_2][0], Node[getafix-62342], [B], S[INITIALIZING]
[18:52:07,004][WARN ][indices.cluster          ] [Super Sabre] Failed to mark shard as failed after a failed start for index [es_test_2] and shard id [0]
org.elasticsearch.index.shard.recovery.RecoveryFailedException: Index Shard [es_test_2][0]: Recovery failed from [Shadow Slasher][getafix-27084][data][inet[getafix.traveljury.com/127.0.0.2:9301]] into [Super Sabre][getafix-62342][data][inet[getafix.traveljury.com/127.0.0.2:9302]]
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:154)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService$3.run(IndicesClusterStateService.java:325)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.ElasticSearchInterruptedException
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:97)
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:34)
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:124)
    ... 4 more
[18:52:07,002][WARN ][cluster.action.shard     ] [Super Sabre] Sending failed shard for [es_test][2], Node[getafix-62342], [B], S[INITIALIZING]
[18:52:07,005][WARN ][indices.cluster          ] [Super Sabre] Failed to mark shard as failed after a failed start for index [es_test] and shard id [2]
org.elasticsearch.index.shard.recovery.RecoveryFailedException: Index Shard [es_test][2]: Recovery failed from [Shadow Slasher][getafix-27084][data][inet[getafix.traveljury.com/127.0.0.2:9301]] into [Super Sabre][getafix-62342][data][inet[getafix.traveljury.com/127.0.0.2:9302]]
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:154)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService$3.run(IndicesClusterStateService.java:325)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.ElasticSearchInterruptedException
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:97)
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:34)
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:124)
    ... 4 more
[18:52:07,004][WARN ][indices.cluster          ] [Super Sabre] Failed to mark shard as failed after a failed start for index [es_test_2] and shard id [3]
org.elasticsearch.index.shard.recovery.RecoveryFailedException: Index Shard [es_test_2][3]: Recovery failed from [Shadow Slasher][getafix-27084][data][inet[getafix.traveljury.com/127.0.0.2:9301]] into [Super Sabre][getafix-62342][data][inet[getafix.traveljury.com/127.0.0.2:9302]]
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:154)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService$3.run(IndicesClusterStateService.java:325)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.elasticsearch.ElasticSearchInterruptedException
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:97)
    at org.elasticsearch.transport.PlainTransportFuture.txGet(PlainTransportFuture.java:34)
    at org.elasticsearch.index.shard.recovery.RecoveryAction.startRecovery(RecoveryAction.java:124)
    ... 4 more

Sorting on a text field hangs

A search request with { "sort": { "text_field": {} }} just hangs, with no response.

It makes no difference whether or not I create an explicit mapping first, or whether I search one index/type combination or _all.

Test script:
curl -XGET 'http://127.0.0.1:9200/_cluster/nodes'
curl -XDELETE 'http://127.0.0.2:9200/es_test/'
curl -XDELETE 'http://127.0.0.2:9200/es_test_2/'
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true'
curl -XPUT 'http://127.0.0.2:9200/es_test/'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/'
curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true'

curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/1?opType=create'  -d '{
   "num" : 2,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/2?opType=create'  -d '{
   "num" : 3,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/3?opType=create'  -d '{
   "num" : 4,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/4?opType=create'  -d '{
   "num" : 5,   "text" : "foo"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/5?opType=create'  -d '{
   "num" : 6,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/6?opType=create'  -d '{
   "num" : 7,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/7?opType=create'  -d '{
   "num" : 8,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/8?opType=create'  -d '{
   "num" : 9,   "text" : "foo bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/9?opType=create'  -d '{
   "num" : 10,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/10?opType=create'  -d '{
   "num" : 11,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/11?opType=create'  -d '{
   "num" : 12,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/12?opType=create'  -d '{
   "num" : 13,   "text" : "foo bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/13?opType=create'  -d '{
   "num" : 14,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/14?opType=create'  -d '{
   "num" : 15,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/15?opType=create'  -d '{
   "num" : 16,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/16?opType=create'  -d '{
   "num" : 17,   "text" : "bar baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/17?opType=create'  -d '{
   "num" : 18,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/18?opType=create'  -d '{
   "num" : 19,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/19?opType=create'  -d '{
   "num" : 20,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/20?opType=create'  -d '{
   "num" : 21,   "text" : "baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/21?opType=create'  -d '{
   "num" : 22,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/22?opType=create'  -d '{
   "num" : 23,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/23?opType=create'  -d '{
   "num" : 24,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/24?opType=create'  -d '{
   "num" : 25,   "text" : "bar"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_1/25?opType=create'  -d '{
   "num" : 26,   "text" : "foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test/type_2/26?opType=create'  -d '{
   "num" : 27,   "text" : "foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_1/27?opType=create'  -d '{
   "num" : 28,   "text" : "foo baz"}'
curl -XPUT 'http://127.0.0.2:9200/es_test_2/type_2/28?opType=create'  -d '{
   "num" : 29,   "text" : "foo baz"}'


curl -XPOST 'http://127.0.0.2:9200/_flush?refresh=true' 


curl -XGET 'http://127.0.0.2:9200/_all/_search'  -d '
{
   "sort" : {
      "text" : {}
   },
   "query" : {
      "matchAll" : {}
   }
}
'

Query: support negative queries

Allow negative queries, i.e. matching all documents that do not match a given query. This could be expressed either with a boolean query using a mustNot clause, or in the query string syntax.
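For illustration, such a negative query might look like the sketch below: a boolean query whose mustNot clause excludes documents matching a term, combined with matchAll so that everything else is returned. This is an assumption about the shape of the DSL (the camelCase names mustNot and matchAll follow the style used elsewhere in this report), not a confirmed API:

```json
{
   "query" : {
      "bool" : {
         "must" : { "matchAll" : {} },
         "mustNot" : { "term" : { "text" : "foo" } }
      }
   }
}
```

The equivalent in query string syntax would presumably be a leading negation, e.g. -text:foo, matching all documents whose text field does not contain "foo".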
