
Mongolastic

Build Status · Codacy code quality · Docker Pulls · mongo.java.driver 3.4.2 · elastic.java.driver 6.2.4 · License: MIT

Mongolastic enables you to migrate your datasets from a mongod node to an Elasticsearch node and vice versa. Since Mongo and Elastic servers can run with different characteristics, the tool provides several optional and required settings to connect them reliably. Mongolastic reads a YAML or JSON configuration file describing the migration and starts syncing data in the specified direction.

How it works

First, either pull the corresponding image of the app from Docker Hub or download the latest mongolastic.jar file.

Second, create a YAML or JSON file that contains the following structure:

misc:
    dindex:
        name: <string>      (1)
        as: <string>        (2)
    ctype:
        name: <string>      (3)
        as: <string>        (4)
    direction: (em | me)    (5)
    batch: <number>         (6)
    dropDataset: <bool>     (7)
mongo:
    host: <ip-address>      (8)
    port: <number>          (9)
    query: "mongo-query"    (10)
    project: "projection"   (11)
    auth:                   (12)
        user: <string>
        pwd: "password"
        source: <db-name>
        mechanism: ( plain | scram-sha-1 | x509 | gssapi | cr )
elastic:
    host: <ip-address>     (13)
    port: <number>         (14)
    dateFormat: "<format>" (15)
    longToString: <bool>   (16)
    clusterName: <string>  (17)
    auth:                  (18)
        user: <string>
        pwd: "password"
  1. The database/index name to connect to.

  2. An alternative database/index name under which documents will be stored in the target service (Optional)

  3. The collection/type name to export.

  4. An alternative collection/type name under which indexed/collected documents will reside in the target service (Optional)

  5. The direction of the data transfer. The default direction is me (that is, mongo to elasticsearch), so you can omit this option when moving data from Mongo to ES.

  6. Overrides the default batch size, which is 200. (Optional)

  7. Configures whether the target dataset should be dropped prior to loading data. The default value is true (Optional)

  8. The name of the host machine where the mongod instance is running.

  9. The port where the mongod instance is listening.

  10. Data will be transferred based on a JSON MongoDB query (Optional)

  11. As of 1.4.1, you can manipulate documents migrated from Mongo to ES using the $project operator (Optional)

  12. As of v1.3.5, you can access an authenticated MongoDB by providing an auth configuration. (Optional)

  13. The name of the host machine where the Elastic node is running.

  14. The transport port on which the transport module communicates with the running Elastic node, e.g. 9300 for node-to-node communication.

  15. A custom formatter for Date fields to use instead of the default DateCodec (Optional)

  16. Serializes long values as strings for backwards compatibility with other tools (Optional)

  17. Connects to a specific Elastic cluster (Optional)

  18. As of v1.3.9, you can access an authenticated Elasticsearch by providing an auth configuration. (Optional)
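
For instance, a configuration that pulls a type back from Elasticsearch into MongoDB (direction: em) against an authenticated mongod could combine the options above like this (host addresses and credentials are placeholders):

misc:
    dindex:
        name: twitter
    ctype:
        name: tweets
    direction: em
mongo:
    host: 127.0.0.1
    port: 27017
    auth:
        user: joe
        pwd: "1234"
        source: admin
        mechanism: scram-sha-1
elastic:
    host: 127.0.0.1
    port: 9300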


Alternatively, a JSON file with the same structure as the YAML above can be specified as the Mongolastic configuration file.

{
	"misc": {
		"dindex": {
			"name": "twitter",
			"as": "media"
		},
		"ctype": {
			"name": "tweets",
			"as": "posts"
		},
		"direction": "me",
		"batch": 400,
		"dropDataset": true
	},
	"mongo": {
		"host": "127.0.0.1",
		"port": 27017,
		"query": "{ lang: 'en' }",
		"project": "{ user:1, name:'$user.name', location: { $substr: [ '$user.location', 10, 15 ] }}",
		"auth": {
			"user": "joe",
			"pwd": "1234",
			"source": "twitter",
			"mechanism": "scram-sha-1"
		}
	},
	"elastic": {
		"host": "127.0.0.1",
		"port": 9300,
		"dateFormat": "yyyy-MM-dd",
		"longToString": true,
		"auth": {
			"user": "joe",
			"pwd": "4321"
		}
	}
}

Example #1

The following files have the same configuration details:

YAML file
misc:
    dindex:
        name: twitter
        as: kodcu
    ctype:
        name: tweets
        as: posts
mongo:
    host: localhost
    port: 27017
    query: "{ 'user.name' : 'kodcu.com'}"
elastic:
    host: localhost
    port: 9300
JSON file
{
	"misc": {
		"dindex": {
			"name": "twitter",
			"as": "kodcu"
		},
		"ctype": {
			"name": "tweets",
			"as": "posts"
		}
	},
	"mongo": {
		"host": "localhost",
		"port": 27017,
		"query": "{ 'user.name' : 'kodcu.com'}"
	},
	"elastic": {
		"host": "localhost",
		"port": 9300
	}
}

This config says that the transfer direction is from MongoDB to Elasticsearch. Mongolastic first looks at the tweets collection of the twitter database, filtered to documents where the user name is kodcu.com, on a mongod server running on the default host interface and port. If it finds matching data, it starts copying it into an Elasticsearch node running on the default host and transport port. Afterwards, you should see a type called "posts" in an index called "kodcu" on that Elastic node. The index and type names differ from the source because the "dindex.as" and "ctype.as" options were set; these place the transferred data in the posts type of the kodcu index.

After downloading the jar or pulling the image and providing a config file, run the tool either as:

$ java -jar mongolastic.jar -f config.file

or

$ docker run --rm -v $(pwd)/config.file:/config.file --net host ozlerhakan/mongolastic:<tag> config.file

Example #2

Using the project field, you can manipulate documents while migrating them from MongoDB to Elasticsearch. For more examples of the $project operator of the aggregation pipeline, take a look at its documentation.

misc:
    dindex:
        name: twitter
    ctype:
        name: tweets
mongo:
    host: 192.168.10.151
    port: 27017
    project: "{ user: 1, name: '$user.name', location: { $substr: [ '$user.location', 10, 15 ] }}" (1)
elastic:
    host: 192.168.10.152
    port: 9300
  1. The migrated documents will keep the user field and gain the new fields name and location.
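
To illustrate what this projection does, here is a plain-Python sketch of the operators involved ($project field inclusion, field renaming, and $substr). This is not the tool's own code, and the sample document is invented:

```python
def project(doc):
    """Mimic the $project stage above: keep `user`, derive `name` from
    $user.name, and take a $substr of $user.location (offset 10, length 15)."""
    return {
        "user": doc["user"],                         # user: 1
        "name": doc["user"]["name"],                 # name: '$user.name'
        "location": doc["user"]["location"][10:25],  # $substr: [field, 10, 15]
    }

sample = {
    "user": {"name": "kodcu.com", "location": "somewhere in Istanbul, Turkey"},
    "text": "hello",  # not listed in the projection, so it is dropped
}

result = project(sample)
print(result["name"])      # kodcu.com
print(result["location"])  # in Istanbul, Tu
```

Fields not named in the projection (like text above) are absent from the migrated documents.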

Note
Every run of the tool drops the mentioned db/index in the target environment unless the dropDataset parameter is configured otherwise.
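
To keep an existing target dataset intact across runs, set the parameter explicitly in the misc section:

misc:
    dropDataset: false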

License

Mongolastic is released under MIT.

mongolastic's People

Contributors

chbaranowski · hakdogan · ozlerhakan · san-perfo · winder · zeusbaba


mongolastic's Issues

mongolastic with ElasticSearch 6.5.4 / Mongo 4.0

Hello Hakan,

I have tried to use mongolastic with elasticsearch version 6.5.4 and Mongo 4.0, both as Docker containers at my local machine.

my conf file is as follows:

misc:
    dindex:
        name: gfatest
        as: clusterdb
    ctype:
        name: gfatest
        as: elas
    direction: em
mongo:
    host: 127.0.0.1
    port: 27017
elastic:
    host: 127.0.0.1
    port: 9300
    clusterName: docker-cluster

When I run the mongolastic container I get :
[INFO ] [2019-03-04 10:54:15] [main] [INFO]: -
Config Output:
{elastic=Elastic{host='127.0.0.1', port=9300, clusterName=docker-cluster, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='em', dindex=Namespace{as='clusterdb', name='gfatest'}, ctype=Namespace{as='elas', name='gfatest'}, dropDataset=true}, mongo=Mongo{host='127.0.0.1', port=27017, query='{}', project='null', auth=null}}

[INFO ] [2019-03-04 10:54:15] [main] [INFO]: - no modules loaded
[INFO ] [2019-03-04 10:54:15] [main] [INFO]: - loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
[INFO ] [2019-03-04 10:54:15] [main] [INFO]: - loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
[INFO ] [2019-03-04 10:54:15] [main] [INFO]: - loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
[INFO ] [2019-03-04 10:54:15] [main] [INFO]: - loaded plugin [org.elasticsearch.transport.Netty3Plugin]
[INFO ] [2019-03-04 10:54:15] [main] [INFO]: - loaded plugin [org.elasticsearch.transport.Netty4Plugin]
[INFO ] [2019-03-04 10:54:15] [main] [INFO]: - loaded plugin [org.elasticsearch.xpack.XPackPlugin]
[WARN ] [2019-03-04 10:54:16] [elasticsearch[client][management][T#1]] [WARN]: - Failed to find a usable hardware address from the network interfaces; using random bytes: 3f:fc:94:ab:38:2a:71:18
[INFO ] [2019-03-04 10:54:17] [main] [INFO]: - Cluster created with settings {hosts=[127.0.0.1:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
[INFO ] [2019-03-04 10:54:17] [main] [INFO]: - Adding discovered server 127.0.0.1:27017 to client view of cluster
[INFO ] [2019-03-04 10:54:17] [main] [INFO]: - Load duration: 2152ms
Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{zMQOF4MMQ4-Ge3umdq63iA}{127.0.0.1}{127.0.0.1:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:344)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:242)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:404)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1237)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at com.kodcu.provider.ElasticToMongoProvider.getCount(ElasticToMongoProvider.java:39)
at com.kodcu.provider.Provider.transfer(Provider.java:16)
[INFO ] [2019-03-04 10:54:17] [cluster-ClusterId{value='5c7d03d9d02e7d0001c51d28', description='null'}-127.0.0.1:27017] [INFO]: - Exception in monitor thread while connecting to server 127.0.0.1:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[mongolastic.jar:?]
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115) ~[mongolastic.jar:?]
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:113) [mongolastic.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_121]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_121]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_121]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_121]
at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_121]
at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:57) ~[mongolastic.jar:?]
at com.mongodb.connection.SocketStream.open(SocketStream.java:58) ~[mongolastic.jar:?]
... 3 more
at com.kodcu.main.Mongolastic.proceedService(Mongolastic.java:69)
at java.util.Optional.ifPresent(Optional.java:159)
at com.kodcu.main.Mongolastic.start(Mongolastic.java:58)
at com.kodcu.main.Mongolastic.main(Mongolastic.java:35)

I have tried version 1.4.2 and the latest version of mongolastic, but the result remains the same.
I don't know what I'm doing wrong, so I would really appreciate your help!

Thank you in advance.

Error connecting (?) to ES

My config file:

misc:
        dindex:
                name: dbCars
                as: dbcars
        ctype:
                name: Cars
                as: cars
        direction: me
mongo:
        host: 10.11.28.7
        port: 27017
elastic:
        host: 192.168.2.32
        port: 9200
        longToString: true

Error:

0 [main] INFO com.kodcu.config.FileConfiguration  -
Config Output:
{elastic=Elastic{host='192.168.2.32', port=9200, dateFormat=null, longToString=true}, misc=Misc{batch=500, direction='me', dindex=Namespace{as='dbcars', name='dbCars'}, ctype=Namespace{as='cars', name='Cars'}, dropDataset=true}, mongo=Mongo{host='10.11.28.7', port=27017, query='{}', auth=null}}

224 [main] INFO org.elasticsearch.plugins  - [Mad Dog Rassitano] modules [], plugins [], sites []
247 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
257 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [percolate], type [fixed], size [4], queue_size [1k]
273 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [8], keep_alive [5m]
274 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [listener], type [fixed], size [2], queue_size [null]
275 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [index], type [fixed], size [4], queue_size [200]
275 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [refresh], type [scaling], min [1], size [2], keep_alive [5m]
275 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [suggest], type [fixed], size [4], queue_size [1k]
276 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [generic], type [cached], keep_alive [30s]
278 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [warmer], type [scaling], min [1], size [2], keep_alive [5m]
279 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [search], type [fixed], size [7], queue_size [1k]
279 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [flush], type [scaling], min [1], size [2], keep_alive [5m]
280 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [8], keep_alive [5m]
280 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
281 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [get], type [fixed], size [4], queue_size [1k]
281 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [bulk], type [fixed], size [4], queue_size [50]
282 [main] DEBUG org.elasticsearch.threadpool  - [Mad Dog Rassitano] creating thread_pool [snapshot], type [scaling], min [1], size [2], keep_alive [5m]
718 [main] DEBUG org.elasticsearch.common.network  - configuration:

lo0
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        inet6 fe80::1 prefixlen:64 scope:link
        inet6 ::1 prefixlen:128 scope:host
        UP MULTICAST LOOPBACK mtu:16384 index:1

en1
        inet 192.168.2.32 netmask:255.255.255.0 broadcast:192.168.2.255 scope:site
        inet6 fe80::aa86:ddff:feb0:d36d prefixlen:64 scope:link
        hardware A8:86:DD:B0:D3:6D
        UP MULTICAST mtu:1500 index:5

awdl0
        inet6 fe80::84e9:fdff:fee1:4c12 prefixlen:64 scope:link
        hardware 86:E9:FD:E1:4C:12
        UP MULTICAST mtu:1484 index:9

748 [main] DEBUG org.elasticsearch.common.netty  - using gathering [true]
798 [main] DEBUG org.elasticsearch.client.transport  - [Mad Dog Rassitano] node_sampler_interval[5s]
818 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Using select timeout of 500
818 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Epoll-bug workaround enabled = false
850 [main] DEBUG org.elasticsearch.client.transport  - [Mad Dog Rassitano] adding address [{#transport#-1}{192.168.2.32}{192.168.2.32:9200}]
896 [main] DEBUG org.elasticsearch.transport.netty  - [Mad Dog Rassitano] connected to node [{#transport#-1}{192.168.2.32}{192.168.2.32:9200}]
5935 [main] INFO org.elasticsearch.client.transport  - [Mad Dog Rassitano] failed to get node info for {#transport#-1}{192.168.2.32}{192.168.2.32:9200}, disconnecting...
ReceiveTimeoutTransportException[[][192.168.2.32:9200][cluster:monitor/nodes/liveness] request_id [0] timed out after [5004ms]]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
5937 [main] DEBUG org.elasticsearch.transport.netty  - [Mad Dog Rassitano] disconnecting from [{#transport#-1}{192.168.2.32}{192.168.2.32:9200}] due to explicit disconnect call
5941 [elasticsearch[Mad Dog Rassitano][generic][T#1]] DEBUG org.elasticsearch.transport.netty  - [Mad Dog Rassitano] connected to node [{#transport#-1}{192.168.2.32}{192.168.2.32:9200}]
5997 [main] INFO org.mongodb.driver.cluster  - Cluster created with settings {hosts=[10.11.28.7:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
5998 [main] INFO org.mongodb.driver.cluster  - Adding discovered server 10.11.28.7:27017 to client view of cluster
6068 [main] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=UNKNOWN, servers=[{address=10.11.28.7:27017, type=UNKNOWN, state=CONNECTING}]
6118 [main] INFO org.mongodb.driver.cluster  - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=10.11.28.7:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
6730 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:1, serverValue:802}] to 10.11.28.7:27017
6730 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] DEBUG org.mongodb.driver.cluster  - Checking status of 10.11.28.7:27017
6889 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=10.11.28.7:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[2, 6, 5]}, minWireVersion=0, maxWireVersion=2, maxDocumentSize=16777216, roundTripTimeNanos=157645062, setName='rs0', canonicalAddress=mongohml:27017, hosts=[mongohml:27018, mongohml:27017, mongohml:27019], passives=[], arbiters=[], primary='mongohml:27017', tagSet=TagSet{[]}, electionId=null, setVersion=3}
6890 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.cluster  - Discovered cluster type of REPLICA_SET
6891 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.cluster  - Adding discovered server mongohml:27018 to client view of cluster
6893 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.cluster  - Adding discovered server mongohml:27017 to client view of cluster
6893 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.cluster  - Adding discovered server mongohml:27019 to client view of cluster
6894 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.cluster  - Server 10.11.28.7:27017 is no longer a member of the replica set.  Removing from client view of cluster.
6896 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] INFO org.mongodb.driver.cluster  - Canonical address mongohml:27017 does not match server address.  Removing 10.11.28.7:27017 from client view of cluster
6896 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=REPLICA_SET, servers=[{address=mongohml:27017, type=UNKNOWN, state=CONNECTING}, {address=mongohml:27018, type=UNKNOWN, state=CONNECTING}, {address=mongohml:27019, type=UNKNOWN, state=CONNECTING}]
6897 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-10.11.28.7:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:1, serverValue:802}
8610 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27019] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:4, serverValue:810}] to mongohml:27019
8610 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27018] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:2, serverValue:803}] to mongohml:27018
8610 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27017] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:3, serverValue:803}] to mongohml:27017
8611 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27018] DEBUG org.mongodb.driver.cluster  - Checking status of mongohml:27018
8611 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27019] DEBUG org.mongodb.driver.cluster  - Checking status of mongohml:27019
8612 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27017] DEBUG org.mongodb.driver.cluster  - Checking status of mongohml:27017
8770 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27019] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=mongohml:27019, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[2, 6, 5]}, minWireVersion=0, maxWireVersion=2, maxDocumentSize=16777216, roundTripTimeNanos=157484839, setName='rs0', canonicalAddress=mongohml:27019, hosts=[mongohml:27018, mongohml:27017, mongohml:27019], passives=[], arbiters=[], primary='mongohml:27017', tagSet=TagSet{[]}, electionId=null, setVersion=3}
8770 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27017] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=mongohml:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[2, 6, 5]}, minWireVersion=0, maxWireVersion=2, maxDocumentSize=16777216, roundTripTimeNanos=157568194, setName='rs0', canonicalAddress=mongohml:27017, hosts=[mongohml:27018, mongohml:27017, mongohml:27019], passives=[], arbiters=[], primary='mongohml:27017', tagSet=TagSet{[]}, electionId=null, setVersion=3}
8770 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27018] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=mongohml:27018, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[2, 6, 5]}, minWireVersion=0, maxWireVersion=2, maxDocumentSize=16777216, roundTripTimeNanos=157648746, setName='rs0', canonicalAddress=mongohml:27018, hosts=[mongohml:27018, mongohml:27017, mongohml:27019], passives=[], arbiters=[], primary='mongohml:27017', tagSet=TagSet{[]}, electionId=null, setVersion=3}
8772 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27019] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=REPLICA_SET, servers=[{address=mongohml:27017, type=UNKNOWN, state=CONNECTING}, {address=mongohml:27018, type=UNKNOWN, state=CONNECTING}, {address=mongohml:27019, type=REPLICA_SET_SECONDARY, roundTripTime=157,5 ms, state=CONNECTED}]
8773 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27018] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=REPLICA_SET, servers=[{address=mongohml:27017, type=UNKNOWN, state=CONNECTING}, {address=mongohml:27018, type=REPLICA_SET_SECONDARY, roundTripTime=157,6 ms, state=CONNECTED}, {address=mongohml:27019, type=REPLICA_SET_SECONDARY, roundTripTime=157,5 ms, state=CONNECTED}]
8775 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27017] INFO org.mongodb.driver.cluster  - Setting max set version to 3 from replica set primary mongohml:27017
8775 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27017] INFO org.mongodb.driver.cluster  - Discovered replica set primary mongohml:27017
8776 [cluster-ClusterId{value='5790d7d7ecfbab71ef9eda9e', description='null'}-mongohml:27017] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=REPLICA_SET, servers=[{address=mongohml:27017, type=REPLICA_SET_PRIMARY, roundTripTime=157,6 ms, state=CONNECTED}, {address=mongohml:27018, type=REPLICA_SET_SECONDARY, roundTripTime=157,6 ms, state=CONNECTED}, {address=mongohml:27019, type=REPLICA_SET_SECONDARY, roundTripTime=157,5 ms, state=CONNECTED}]
9405 [main] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:5, serverValue:811}] to mongohml:27019
9411 [main] DEBUG org.mongodb.driver.protocol.command  - Sending command {count : BsonString{value='Cars'}} to database dbCars on connection [connectionId{localValue:5, serverValue:811}] to server mongohml:27019
9574 [main] DEBUG org.mongodb.driver.protocol.command  - Command execution completed
9574 [main] INFO com.kodcu.provider.MongoToElasticProvider  - Mongo collection count: 261834
9578 [main] INFO com.kodcu.main.Mongolastic  - Load duration: 9576ms
Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{192.168.2.32}{192.168.2.32:9200}]]
    at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
    at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
    at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1226)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56)
    at com.kodcu.service.ElasticBulkService.dropDataSet(ElasticBulkService.java:94)
    at com.kodcu.provider.Provider.transfer(Provider.java:22)
    at com.kodcu.main.Mongolastic.proceedService(Mongolastic.java:61)
    at java.util.Optional.ifPresent(Optional.java:159)
    at com.kodcu.main.Mongolastic.start(Mongolastic.java:50)
    at com.kodcu.main.Mongolastic.main(Mongolastic.java:38)

Support for elasticsearch 6.1

Getting the following error in elasticsearch while trying to transfer data,

java.lang.IllegalStateException: Received message from unsupported version: [5.3.0] minimal compatible version is: [5.6.0]
	at org.elasticsearch.transport.TcpTransport.ensureVersionCompatibility(TcpTransport.java:1428) ~[elasticsearch-6.1.1.jar:6.1.1]
	at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1375) ~[elasticsearch-6.1.1.jar:6.1.1]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:64) ~[transport-netty4-6.1.1.jar:6.1.1]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-handler-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

Will updating the Elasticsearch Java driver from 5.3 fix this problem?

Support Transferring Multiple collections to ES

Currently, mongolastic does not appear to support transferring more than one collection to Elasticsearch. How can we list the collection/type names that need to be imported into Elasticsearch?
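Since the config file takes a single `ctype` name, one common workaround (a sketch, not a built-in mongolastic feature; the collection names, database name, and hosts below are made-up examples) is to generate one config per collection and invoke the jar once for each:

```shell
# Hypothetical workaround: one mongolastic run per collection.
# "users orders products" and "mydb" are example names only.
for coll in users orders products; do
  # Write a minimal YAML config for this collection.
  cat > "config-${coll}.yml" <<EOF
misc:
    dindex:
        name: mydb
    ctype:
        name: ${coll}
    direction: me
mongo:
    host: localhost
    port: 27017
elastic:
    host: localhost
    port: 9300
EOF
  # java -jar mongolastic.jar -f "config-${coll}.yml"   # run per collection
done
```

Each run migrates one collection; the `direction: me` entry follows the mongo-to-elastic direction defined in the config spec above.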

mongodb

com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongo-java-driver-3.12.11.jar:na]
at java.base/java.lang.Thread.run(Thread.java:830) ~[na:na]
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[na:na]
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:579) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:549) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[na:na]
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:339) ~[na:na]
at java.base/java.net.Socket.connect(Socket.java:603) ~[na:na]
at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongo-java-driver-3.12.11.jar:na]
... 3 common frames omitted

2022-08-08 21:45:47.734 INFO 3364 --- [127.0.0.1:27019] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server 127.0.0.1:27019

com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongo-java-driver-3.12.11.jar:na]
at java.base/java.lang.Thread.run(Thread.java:830) ~[na:na]
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[na:na]
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:579) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:549) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[na:na]
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:339) ~[na:na]
at java.base/java.net.Socket.connect(Socket.java:603) ~[na:na]
at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongo-java-driver-3.12.11.jar:na]
... 3 common frames omitted

2022-08-08 21:45:47.739 INFO 3364 --- [127.0.0.1:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server 127.0.0.1:27017

com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongo-java-driver-3.12.11.jar:na]
at java.base/java.lang.Thread.run(Thread.java:830) ~[na:na]
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[na:na]
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:579) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:549) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[na:na]
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:339) ~[na:na]
at java.base/java.net.Socket.connect(Socket.java:603) ~[na:na]
at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongo-java-driver-3.12.11.jar:na]
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongo-java-driver-3.12.11.jar:na]
... 3 common frames omitted

2022-08-08 21:45:47.908 INFO 3364 --- [ main] com.join.service.StudentServiceTest : Started StudentServiceTest in 4.857 seconds (JVM running for 6.117)
2022-08-08 21:45:48.520 INFO 3364 --- [ main] org.mongodb.driver.cluster : No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@3f6f9cef from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=127.0.0.1:27019, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, ServerDescription{address=127.0.0.1:27018, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, ServerDescription{address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}]}. Waiting for 30000 ms before timing out

org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@3f6f9cef. Client view of cluster state is {type=REPLICA_SET, servers=[{address=127.0.0.1:27019, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, {address=127.0.0.1:27018, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, {address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@3f6f9cef. Client view of cluster state is {type=REPLICA_SET, servers=[{address=127.0.0.1:27019, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, {address=127.0.0.1:27018, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, {address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}]

at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:95)
at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:3044)
at org.springframework.data.mongodb.core.MongoTemplate.executeFindOneInternal(MongoTemplate.java:2939)
at org.springframework.data.mongodb.core.MongoTemplate.doFindOne(MongoTemplate.java:2615)
at org.springframework.data.mongodb.core.MongoTemplate.doFindOne(MongoTemplate.java:2585)
at org.springframework.data.mongodb.core.MongoTemplate.findById(MongoTemplate.java:922)
at org.springframework.data.mongodb.repository.support.SimpleMongoRepository.findById(SimpleMongoRepository.java:132)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at org.springframework.data.repository.core.support.RepositoryMethodInvoker$RepositoryFragmentMethodInvoker.lambda$new$0(RepositoryMethodInvoker.java:289)
at org.springframework.data.repository.core.support.RepositoryMethodInvoker.doInvoke(RepositoryMethodInvoker.java:137)
at org.springframework.data.repository.core.support.RepositoryMethodInvoker.invoke(RepositoryMethodInvoker.java:121)
at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:530)
at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:286)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:640)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.doInvoke(QueryExecutorMethodInterceptor.java:164)
at org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.invoke(QueryExecutorMethodInterceptor.java:139)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:81)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
at com.sun.proxy.$Proxy70.findById(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:137)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
at com.sun.proxy.$Proxy70.findById(Unknown Source)
at com.join.service.StudentService.findOneById(StudentService.java:32)
at com.join.service.StudentServiceTest.testFindOneById(StudentServiceTest.java:41)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1507)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1507)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)

Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@3f6f9cef. Client view of cluster state is {type=REPLICA_SET, servers=[{address=127.0.0.1:27019, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, {address=127.0.0.1:27018, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}, {address=127.0.0.1:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused: no further information}}]
at com.mongodb.internal.connection.BaseCluster.createTimeoutException(BaseCluster.java:408)
at com.mongodb.internal.connection.BaseCluster.selectServer(BaseCluster.java:123)
at com.mongodb.internal.connection.AbstractMultiServerCluster.selectServer(AbstractMultiServerCluster.java:54)
at com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:157)
at com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:105)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:287)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:191)
at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:211)
at org.springframework.data.mongodb.core.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:3087)
at org.springframework.data.mongodb.core.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:3058)
at org.springframework.data.mongodb.core.MongoTemplate.executeFindOneInternal(MongoTemplate.java:2936)
... 104 more

This is my YAML:

spring:
    data:
        mongodb:
            database: test
            uri: mongodb://192.168.133.248:27017,192.168.133.248:27018/test?slaveOk=true&replicaSet=myrs

This is my XML:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
    <version>2.7.2</version>
</dependency>

Not executing in Windows cmd

I was trying to transfer the Elasticsearch data into MongoDB. I tried your jar, mongolastic.jar, with the config file and command below, but it is not getting executed. Please check and let me know the details if I'm missing anything.

#config.json

{
    "misc": {
        "dindex": {
            "name": "elastictest"
        },
        "ctype": {
            "name": "mongotest"
        }
    },
    "mongo": {
        "host": "localhost",
        "port": 27017
    },
    "elastic": {
        "host": "127.0.0.1",
        "port": 9200
    }
}

The command I am using to execute it. I am sure the jar and the config file are under C:\Users\sachin on Windows.

C:\Users\sachin>java -jar mongolastic.jar -f config.json

The error I am getting:

2020-11-20 15:50:11,472 main ERROR Unable to invoke factory method in class class org.apache.logging.log4j.core.appender.ConsoleAppender for element Console. java.lang.IllegalStateException: No factory method found for class org.apache.logging.log4j.core.appender.ConsoleAppender
at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.findFactoryMethod(PluginBuilder.java:224)
at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:130)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:952)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:892)
at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:884)
at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:508)
at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:232)
at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:244)
at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:545)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:617)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:634)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:229)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:152)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:122)
at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at com.kodcu.main.Mongolastic.<init>(Mongolastic.java:23)

2020-11-20 15:50:11,485 main ERROR Null object returned for Console in Appenders.
2020-11-20 15:50:11,491 main ERROR Unable to locate appender "STDOUT" for logger config "root"
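One note on the config in this question: per the configuration spec at the top of this README, the `misc` section also accepts a `direction` field (`em` = elastic to mongo, `me` = mongo to elastic). For the Elasticsearch-to-MongoDB transfer described here, the same config with the direction made explicit would look like the sketch below; whether the missing field caused this particular log4j failure is not confirmed.

```json
{
    "misc": {
        "dindex": { "name": "elastictest" },
        "ctype": { "name": "mongotest" },
        "direction": "em"
    },
    "mongo": { "host": "localhost", "port": 27017 },
    "elastic": { "host": "127.0.0.1", "port": 9200 }
}
```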

Cannot transfer data to Elasticsearch 7.4 from Mongo

Here is my config file

misc:
    dindex:
        name: rem
    ctype:
        name: calls
mongo:
    host: localhost
    port: 27017
elastic:
    host: localhost
    port: 9300
    clusterName: elasticsearch

I am getting this error in the ES console:

exception caught on transport layer [Netty4TcpChannel{localAddress=/127.0.0.1:9300, remoteAddress=/127.0.0.1:60695}], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [6.1.1] minimal compatible version is: [6.8.0]
	at org.elasticsearch.transport.InboundMessage.ensureVersionCompatibility(InboundMessage.java:137) ~[elasticsearch-7.3.1.jar:7.3.1]
	at org.elasticsearch.transport.InboundMessage.access$000(InboundMessage.java:39) ~[elasticsearch-7.3.1.jar:7.3.1]
	at org.elasticsearch.transport.InboundMessage$Reader.deserialize(InboundMessage.java:76) ~[elasticsearch-7.3.1.jar:7.3.1]
	at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:116) ~[elasticsearch-7.3.1.jar:7.3.1]
	at org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:105) ~[elasticsearch-7.3.1.jar:7.3.1]
	at org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:660) [elasticsearch-7.3.1.jar:7.3.1]
	at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62) [transport-netty4-client-7.3.1.jar:7.3.1]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.36.Final.jar:4.1.36.Final]

And this error while running the jar:

Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{MBfA6xa-TWicelM-f8GEBQ}{localhost}{127.0.0.1:9300}]]
	at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
	at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
	at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
	at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:360)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405)
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394)
	at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1247)
	at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
	at com.kodcu.service.ElasticBulkService.dropDataSet(ElasticBulkService.java:90)
	at com.kodcu.provider.Provider.transfer(Provider.java:21)
	at com.kodcu.main.Mongolastic.proceedService(Mongolastic.java:69)
	at java.util.Optional.ifPresent(Optional.java:159)
	at com.kodcu.main.Mongolastic.start(Mongolastic.java:58)
	at com.kodcu.main.Mongolastic.main(Mongolastic.java:35)

Error getting info from Mongo

0 [main] INFO com.kodcu.config.FileConfiguration  - 
Config Output:
{elastic=Elastic{host='localhost', port=9300, dateFormat=null, longToString=false}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='yapdns', name='yapdns'}, ctype=Namespace{as='dnsinfo', name='dnsinfo'}, dropDataset=true}, mongo=Mongo{host='localhost', port=27017, query='{}', auth=com.kodcu.config.yml.Auth@5577140b}}

165 [main] INFO org.elasticsearch.plugins  - [Iron Man 2020] modules [], plugins [], sites []
176 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
182 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [percolate], type [fixed], size [2], queue_size [1k]
192 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [4], keep_alive [5m]
192 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
192 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [index], type [fixed], size [2], queue_size [200]
193 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
193 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [suggest], type [fixed], size [2], queue_size [1k]
193 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [generic], type [cached], keep_alive [30s]
194 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
194 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [search], type [fixed], size [4], queue_size [1k]
194 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
194 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [4], keep_alive [5m]
194 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
194 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [get], type [fixed], size [2], queue_size [1k]
194 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [bulk], type [fixed], size [2], queue_size [50]
195 [main] DEBUG org.elasticsearch.threadpool  - [Iron Man 2020] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
525 [main] DEBUG org.elasticsearch.common.network  - configuration:

lo
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        inet6 ::1 prefixlen:128 scope:host
        UP LOOPBACK mtu:65536 index:1

ens33
        inet 192.168.79.136 netmask:255.255.255.0 broadcast:192.168.79.255 scope:site
        inet6 fe80::44ac:303c:270a:57cb prefixlen:64 scope:link
        hardware 00:0C:29:31:1C:6E
        UP MULTICAST mtu:1500 index:2

545 [main] DEBUG org.elasticsearch.common.netty  - using gathering [true]
577 [main] DEBUG org.elasticsearch.client.transport  - [Iron Man 2020] node_sampler_interval[5s]
595 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Using select timeout of 500
595 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Epoll-bug workaround enabled = false
616 [main] DEBUG org.elasticsearch.client.transport  - [Iron Man 2020] adding address [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
646 [main] DEBUG org.elasticsearch.transport.netty  - [Iron Man 2020] connected to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
675 [main] DEBUG org.elasticsearch.transport.netty  - [Iron Man 2020] connected to node [{Main Elastic}{OWIZkFPKRa-BpIu4pAjV8w}{127.0.0.1}{localhost/127.0.0.1:9300}{master=true}]
721 [main] INFO org.mongodb.driver.cluster  - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
722 [main] INFO org.mongodb.driver.cluster  - Adding discovered server localhost:27017 to client view of cluster
767 [main] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
789 [main] INFO org.mongodb.driver.cluster  - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
884 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:1}
885 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Exception in monitor thread while connecting to server localhost:27017
com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=PLAIN, userName='admin', source='yapdns', password=<hidden>, mechanismProperties={}}
    at com.mongodb.connection.SaslAuthenticator.authenticate(SaslAuthenticator.java:61)
    at com.mongodb.connection.InternalStreamConnectionInitializer.authenticateAll(InternalStreamConnectionInitializer.java:99)
    at com.mongodb.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:44)
    at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115)
    at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:128)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.mongodb.MongoCommandException: Command failed with error 59: 'no such cmd: saslStart' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "no such cmd: saslStart", "code" : 59, "bad cmd" : { "saslStart" : 1, "mechanism" : "PLAIN", "payload" : { "$binary" : "YWRtaW4AYWRtaW4AYWRtaW4xMjM=", "$type" : "0" } } }
    at com.mongodb.connection.CommandHelper.createCommandFailureException(CommandHelper.java:170)
    at com.mongodb.connection.CommandHelper.receiveCommandResult(CommandHelper.java:123)
    at com.mongodb.connection.CommandHelper.executeCommand(CommandHelper.java:32)
    at com.mongodb.connection.SaslAuthenticator.sendSaslStart(SaslAuthenticator.java:95)
    at com.mongodb.connection.SaslAuthenticator.authenticate(SaslAuthenticator.java:45)
    ... 5 more
891 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=PLAIN, userName='admin', source='yapdns', password=<hidden>, mechanismProperties={}}}, caused by {com.mongodb.MongoCommandException: Command failed with error 59: 'no such cmd: saslStart' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "no such cmd: saslStart", "code" : 59, "bad cmd" : { "saslStart" : 1, "mechanism" : "PLAIN", "payload" : { "$binary" : "YWRtaW4AYWRtaW4AYWRtaW4xMjM=", "$type" : "0" } } }}}]
1396 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:2}
1901 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:3}
2406 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:4}
2910 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:5}
3418 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:6}
3922 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:7}
4427 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:8}
4931 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:9}
5435 [cluster-ClusterId{value='5772bbf5a886531ce72d0726', description='null'}-localhost:27017] DEBUG org.mongodb.driver.connection  - Closing connection connectionId{localValue:10}
5791 [main] INFO com.kodcu.main.Mongolastic  - Load duration: 5790ms
Exception in thread "main" com.mongodb.MongoTimeoutException: Timed out after 5000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primaryPreferred}. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=PLAIN, userName='admin', source='yapdns', password=<hidden>, mechanismProperties={}}}, caused by {com.mongodb.MongoCommandException: Command failed with error 59: 'no such cmd: saslStart' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "no such cmd: saslStart", "code" : 59, "bad cmd" : { "saslStart" : 1, "mechanism" : "PLAIN", "payload" : { "$binary" : "YWRtaW4AYWRtaW4AYWRtaW4xMjM=", "$type" : "0" } } }}}]
    at com.mongodb.connection.BaseCluster.createTimeoutException(BaseCluster.java:369)
    at com.mongodb.connection.BaseCluster.selectServer(BaseCluster.java:101)
    at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:75)
    at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:71)
    at com.mongodb.binding.ClusterBinding.getReadConnectionSource(ClusterBinding.java:63)
    at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:201)
    at com.mongodb.operation.CountOperation.execute(CountOperation.java:206)
    at com.mongodb.operation.CountOperation.execute(CountOperation.java:53)
    at com.mongodb.Mongo.execute(Mongo.java:772)
    at com.mongodb.Mongo$2.execute(Mongo.java:759)
    at com.mongodb.MongoCollectionImpl.count(MongoCollectionImpl.java:185)
    at com.mongodb.MongoCollectionImpl.count(MongoCollectionImpl.java:170)
    at com.kodcu.provider.MongoToElasticProvider.getCount(MongoToElasticProvider.java:33)
    at com.kodcu.provider.Provider.transfer(Provider.java:17)
    at com.kodcu.main.Mongolastic.proceedService(Mongolastic.java:61)
    at java.util.Optional.ifPresent(Optional.java:159)
    at com.kodcu.main.Mongolastic.start(Mongolastic.java:50)
    at com.kodcu.main.Mongolastic.main(Mongolastic.java:38)

I keep stumbling across this error. This is what my yaml file looks like:

misc:
    dindex:
        name: yapdns
        as: yapdns
    ctype:
        name: dnsinfo
        as: dnsinfo
    direction: me
mongo:
    host: localhost
    port: 27017
    auth:
        user: admin
        pwd: "xxxxxx"
        source: yapdns
        mechanism: plain
elastic:
    host: localhost
    port: 9300

And I think the settings are correct. Could it be a compatibility issue?
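One observation on the trace above, offered as a guess rather than a confirmed fix: the "Caused by" line ("no such cmd: saslStart", error 59) typically means the MongoDB server build does not understand SASL authentication at all — saslStart is only available on newer servers (2.6+ community builds), and the plain mechanism additionally requires an LDAP-capable (Enterprise) server. Switching to a mechanism the server actually supports may help, e.g. for a 3.x server:

mongo:
    auth:
        user: admin
        pwd: "xxxxxx"
        source: yapdns
        mechanism: scram-sha-1

or mechanism: cr for an older server.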

Remote connection

Hi!

I tried to use your plugin on ubuntu 14.04 with elasticsearch 2.3.1.
Everywhere I find the syntax for a localhost mongodb, but what is the syntax for a remote mongodb with authentication, connecting through the admin db?

I always get this message when I launch: java -jar mongolastic.jar -f config_mongo.file

5837 [main] INFO org.elasticsearch.client.transport - [MODOK] failed to get node info for {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9200}, disconnecting...

My elasticsearch is running, verified with: curl -X GET 'http://localhost:9200'

{
  "name" : "Alex Hayden",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.1",
    ....
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

Thank you !
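Following the documented config structure, a remote MongoDB connection with authentication against the admin db could look roughly like this (the host address and credentials below are placeholders, and the right mechanism depends on the server version):

mongo:
    host: 192.168.1.50
    port: 27017
    auth:
        user: admin
        pwd: "secret"
        source: admin
        mechanism: scram-sha-1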

After running mongolastic, no data is transferred from mongodb to elasticsearch

My config file:

misc:
    dindex:
        name: productstore02
    ctype:
        name: Execution
mongo:
    host: win-8727iunk90s 
    port: 27017
elastic:
    host: bpmznsvt04
    port: 9300

Config Output:
{elastic=Elastic{host='bpmznsvt04', port=9300, clusterName=null, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='my-index', name='productstore02'}, ctype=Namespace{as='null', name='Execution'}, dropDataset=true}, mongo=Mongo{host='win-8727iunk90s', port=27017, query='{}', project='null', auth=null}}

[INFO ] [2017-04-14 18:25:02] [main] [INFO]: - no modules loaded
[INFO ] [2017-04-14 18:25:02] [main] [INFO]: - loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
[INFO ] [2017-04-14 18:25:02] [main] [INFO]: - loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
[INFO ] [2017-04-14 18:25:02] [main] [INFO]: - loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
[INFO ] [2017-04-14 18:25:02] [main] [INFO]: - loaded plugin [org.elasticsearch.transport.Netty3Plugin]
[INFO ] [2017-04-14 18:25:02] [main] [INFO]: - loaded plugin [org.elasticsearch.transport.Netty4Plugin]
[INFO ] [2017-04-14 18:25:02] [main] [INFO]: - loaded plugin [org.elasticsearch.xpack.XPackPlugin]
[WARN ] [2017-04-14 18:25:04] [elasticsearch[client][transport_client_boss][T#1]] [WARN]: - Transport response handler not found of id [1]
[WARN ] [2017-04-14 18:25:04] [elasticsearch[client][transport_client_boss][T#2]] [WARN]: - Transport response handler not found of id [3]
[INFO ] [2017-04-14 18:25:04] [main] [INFO]: - Cluster created with settings {hosts=[win-8727iunk90s:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
[INFO ] [2017-04-14 18:25:04] [main] [INFO]: - Adding discovered server win-8727iunk90s:27017 to client view of cluster
[INFO ] [2017-04-14 18:25:04] [main] [INFO]: - No server chosen by ReadPreferenceServerSelector{readPreference=ReadPreference{name=primaryPreferred}} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=win-8727iunk90s:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
[INFO ] [2017-04-14 18:25:04] [cluster-ClusterId{value='58f0a3803a75c75a33cf00a5', description='null'}-win-8727iunk90s:27017] [INFO]: - Opened connection [connectionId{localValue:1, serverValue:17}] to win-8727iunk90s:27017
[INFO ] [2017-04-14 18:25:04] [cluster-ClusterId{value='58f0a3803a75c75a33cf00a5', description='null'}-win-8727iunk90s:27017] [INFO]: - Monitor thread successfully connected to server with description ServerDescription{address=win-8727iunk90s:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 4, 3]}, minWireVersion=0, maxWireVersion=5, maxDocumentSize=16777216, roundTripTimeNanos=7228015}
[INFO ] [2017-04-14 18:25:04] [cluster-ClusterId{value='58f0a3803a75c75a33cf00a5', description='null'}-win-8727iunk90s:27017] [INFO]: - Discovered cluster type of STANDALONE
[INFO ] [2017-04-14 18:25:04] [main] [INFO]: - Opened connection [connectionId{localValue:2, serverValue:18}] to win-8727iunk90s:27017
[INFO ] [2017-04-14 18:25:04] [main] [INFO]: - Mongo collection count: 121468
[INFO ] [2017-04-14 18:25:05] [main] [INFO]: - Transferring data began to elasticsearch.
[INFO ] [2017-04-14 18:25:08] [main] [INFO]: - Transferring data began to elasticsearch.
[INFO ] [2017-04-14 18:25:08] [main] [INFO]: - Closed connection [connectionId{localValue:2, serverValue:18}] to win-8727iunk90s:27017 because the pool has been closed.
[INFO ] [2017-04-14 18:25:10] [main] [INFO]: - Load duration: 8533ms

Is there anything I have configured incorrectly? Thanks.
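One detail that stands out in the Config Output above is dindex=Namespace{as='my-index', ...}: the documents may have been indexed into an index named my-index rather than productstore02. Setting the target names explicitly in the config might rule this out — a guess based on the printed config fields, not a confirmed fix:

misc:
    dindex:
        name: productstore02
        as: productstore02
    ctype:
        name: Execution
        as: Execution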

ES to mongo transfer issue

Hi, it looks like ElasticToMongoProvider (in buildJSONContent) is not shifting the scroll window.
It might be pulling and transferring the same data over and over again to the target mongo database.

Invalid signature file digest for Manifest main attributes

Hello,

Thanks for this tool.
When I launch docker run --rm -v /opt/ELASTICSEARCH/mongolastic/test.yaml:/test.yaml ozlerhakan/mongolastic test.yaml, I get the following error:

Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
	at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:314)
	at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:268)
	at java.util.jar.JarVerifier.processEntry(JarVerifier.java:316)
	at java.util.jar.JarVerifier.update(JarVerifier.java:228)
	at java.util.jar.JarFile.initializeVerifier(JarFile.java:383)
	at java.util.jar.JarFile.getInputStream(JarFile.java:450)
	at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:940)
	at sun.misc.Resource.cachedInputStream(Resource.java:77)
	at sun.misc.Resource.getByteBuffer(Resource.java:160)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:454)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)

Can you help me?

Thanks,

SPI with name Lucene50 does not exist

Won't run:

An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names [es090, completion090, XBloomFilter]

Config: misc.direction: em

NoNodeAvailableException with ES-7.6.2

Hi Hakan,
I am using Elasticsearch 7.6.2 and Mongo 4.2.6.

Questions: 1) Would this plugin work with the ES version above as well?
2) Are the exception thrown on the ES end (included below) and the NoNodeAvailableException from the jar command related?
I am getting a NoNodeAvailableException:

Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{oEcebCdiT66XkdKa2jThjw}{192.168.0.102}{192.168.0.102:9300}]]
    at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
    at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60)
    at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:360)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394)
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1247)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46)
    at com.kodcu.service.ElasticBulkService.dropDataSet(ElasticBulkService.java:90)
    at com.kodcu.provider.Provider.transfer(Provider.java:21)
    at com.kodcu.main.Mongolastic.proceedService(Mongolastic.java:69)
    at java.util.Optional.ifPresent(Unknown Source)
    at com.kodcu.main.Mongolastic.start(Mongolastic.java:58)
    at com.kodcu.main.Mongolastic.main(Mongolastic.java:35)

Also, on the local ES end:
[WARN ][o.e.t.TcpTransport ] [703226260L] exception caught on transport layer [Netty4TcpChannel{localAddress=/192.168.0.102:9300, remoteAddress=/192.168.0.102:59135}], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [6.1.1] minimal compatible version is: [6.8.0]
    at org.elasticsearch.transport.InboundMessage.ensureVersionCompatibility(InboundMessage.java:152) ~[elasticsearch-7.6.2.jar:7.6.2]

Best Regards,
Tarun

Disconnecting from ES

Hi, I'm on ES 2.4 and mongo 3.0.3 on debian 7. Here is the configuration file:

misc:
 dindex:
  name: valueable_dev
 ctype:
  name: product
mongo:
 host: localhost
 port: 27017
elastic:
 host: localhost
 port: 9300

I get this:

root@wheezy:/home/jp/Téléchargements# java -jar mongolastic.jar -f mongo_to_elastic.yml 
0 [main] INFO com.kodcu.config.FileConfiguration  - 
Config Output:
{elastic=Elastic{host='localhost', port=9300, clusterName=null, dateFormat=null, longToString=false, auth=null}, misc=Misc{batch=200, direction='me', dindex=Namespace{as='valueable_dev', name='valueable_dev'}, ctype=Namespace{as='product', name='product'}, dropDataset=true}, mongo=Mongo{host='localhost', port=27017, query='{}', auth=null}}

310 [main] INFO org.elasticsearch.plugins  - [Unseen] modules [], plugins [], sites []
350 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
365 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [percolate], type [fixed], size [1], queue_size [1k]
398 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [2], keep_alive [5m]
399 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [listener], type [fixed], size [1], queue_size [null]
405 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [index], type [fixed], size [1], queue_size [200]
408 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
409 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [suggest], type [fixed], size [1], queue_size [1k]
409 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [generic], type [cached], keep_alive [30s]
413 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
414 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [search], type [fixed], size [2], queue_size [1k]
414 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
414 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [2], keep_alive [5m]
415 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
416 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [get], type [fixed], size [1], queue_size [1k]
417 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [bulk], type [fixed], size [1], queue_size [50]
417 [main] DEBUG org.elasticsearch.threadpool  - [Unseen] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
983 [main] DEBUG org.elasticsearch.common.network  - configuration:

lo
        inet 127.0.0.1 netmask:255.0.0.0 scope:host
        inet6 ::1 prefixlen:128 scope:host
        UP LOOPBACK mtu:16436 index:1

eth0
        inet 10.0.2.15 netmask:255.255.255.0 broadcast:10.0.2.255 scope:site
        inet6 fe80::a00:27ff:feb8:e83f prefixlen:64 scope:link
        hardware 08:00:27:B8:E8:3F
        UP MULTICAST mtu:1500 index:2

eth1
        inet 192.168.56.102 netmask:255.255.255.0 broadcast:192.168.56.255 scope:site
        inet6 fe80::a00:27ff:fe65:a25 prefixlen:64 scope:link
        hardware 08:00:27:65:0A:25
        UP MULTICAST mtu:1500 index:3

1033 [main] DEBUG org.elasticsearch.common.netty  - using gathering [true]
1096 [main] DEBUG org.elasticsearch.client.transport  - [Unseen] node_sampler_interval[5s]
1140 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Using select timeout of 500
1140 [main] DEBUG org.elasticsearch.netty.channel.socket.nio.SelectorUtil  - Epoll-bug workaround enabled = false
1180 [main] DEBUG org.elasticsearch.client.transport  - [Unseen] adding address [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1222 [elasticsearch[Unseen][management][T#1]] DEBUG org.elasticsearch.transport.netty  - [Unseen] connected to node [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]
1375 [elasticsearch[Unseen][transport_client_worker][T#1]{New I/O worker #1}] INFO org.elasticsearch.client.transport  - [Unseen] failed to get local cluster state for {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}, disconnecting...
RemoteTransportException[[Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]]]; nested: TransportSerializationException[Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]]; nested: ExceptionInInitializerError; nested: IllegalArgumentException[An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [es090, completion090, XBloomFilter]];
Caused by: TransportSerializationException[Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]]; nested: ExceptionInInitializerError; nested: IllegalArgumentException[An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [es090, completion090, XBloomFilter]];
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:180)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:138)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ExceptionInInitializerError
    at org.elasticsearch.Version.fromId(Version.java:572)
    at org.elasticsearch.Version.readVersion(Version.java:312)
    at org.elasticsearch.cluster.node.DiscoveryNode.readFrom(DiscoveryNode.java:339)
    at org.elasticsearch.cluster.node.DiscoveryNode.readNode(DiscoveryNode.java:322)
    at org.elasticsearch.cluster.node.DiscoveryNodes.readFrom(DiscoveryNodes.java:594)
    at org.elasticsearch.cluster.node.DiscoveryNodes$Builder.readFrom(DiscoveryNodes.java:674)
    at org.elasticsearch.cluster.ClusterState.readFrom(ClusterState.java:699)
    at org.elasticsearch.cluster.ClusterState$Builder.readFrom(ClusterState.java:677)
    at org.elasticsearch.action.admin.cluster.state.ClusterStateResponse.readFrom(ClusterStateResponse.java:58)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:178)
    ... 23 more
Caused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [es090, completion090, XBloomFilter]
    at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:114)
    at org.apache.lucene.codecs.PostingsFormat.forName(PostingsFormat.java:112)
    at org.elasticsearch.common.lucene.Lucene.<clinit>(Lucene.java:65)
    ... 33 more
1386 [elasticsearch[Unseen][transport_client_worker][T#1]{New I/O worker #1}] DEBUG org.elasticsearch.transport.netty  - [Unseen] disconnecting from [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}] due to explicit disconnect call
1397 [elasticsearch[Unseen][transport_client_worker][T#1]{New I/O worker #1}] WARN org.elasticsearch.transport.netty  - [Unseen] exception caught on transport layer [[id: 0x6324c4d8, /127.0.0.1:34158 :> localhost/127.0.0.1:9300]], closing connection
java.lang.IllegalStateException: Message not fully read (response) for requestId [0], handler [org.elasticsearch.client.transport.TransportClientNodesService$SniffNodesSampler$1$1@4f36d02], error [false]; resetting
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:146)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
1477 [main] INFO org.mongodb.driver.cluster  - Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='5000 ms', maxWaitQueueSize=500}
1480 [main] INFO org.mongodb.driver.cluster  - Adding discovered server localhost:27017 to client view of cluster
1594 [main] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]
1712 [main] INFO org.mongodb.driver.cluster  - No server chosen by ReadPreferenceServerSelector{readPreference=primaryPreferred} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=MULTIPLE, all=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 5000 ms before timing out
1735 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:1, serverValue:6}] to localhost:27017
1736 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Checking status of localhost:27017
1738 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 0, 3]}, minWireVersion=0, maxWireVersion=3, maxDocumentSize=16777216, roundTripTimeNanos=2199767}
1745 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] INFO org.mongodb.driver.cluster  - Discovered cluster type of STANDALONE
1747 [cluster-ClusterId{value='57ce8631330e1114f57cddda', description='null'}-localhost:27017] DEBUG org.mongodb.driver.cluster  - Updating cluster description to  {type=STANDALONE, servers=[{address=localhost:27017, type=STANDALONE, roundTripTime=2,2 ms, state=CONNECTED}]
1757 [main] INFO org.mongodb.driver.connection  - Opened connection [connectionId{localValue:2, serverValue:7}] to localhost:27017
1763 [main] DEBUG org.mongodb.driver.protocol.command  - Sending command {count : BsonString{value='product'}} to database valueable_dev on connection [connectionId{localValue:2, serverValue:7}] to server localhost:27017
1772 [main] DEBUG org.mongodb.driver.protocol.command  - Command execution completed
1772 [main] INFO com.kodcu.provider.MongoToElasticProvider  - Mongo collection count: 6
1774 [main] INFO com.kodcu.main.Mongolastic  - Load duration: 1771ms
Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9300}]]
    at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
    at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
    at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1226)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86)
    at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56)
    at com.kodcu.service.ElasticBulkService.dropDataSet(ElasticBulkService.java:94)
    at com.kodcu.provider.Provider.transfer(Provider.java:22)
    at com.kodcu.main.Mongolastic.proceedService(Mongolastic.java:61)
    at java.util.Optional.ifPresent(Optional.java:159)
    at com.kodcu.main.Mongolastic.start(Mongolastic.java:50)
    at com.kodcu.main.Mongolastic.main(Mongolastic.java:38)
root@wheezy:/home/jp/Téléchargements#
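The `NoNodeAvailableException` above usually means the Elasticsearch transport client could not reach a node at the configured address, most often because the config points at the HTTP port (9200) instead of the transport port (9300), or because `clusterName` does not match the target cluster. A minimal sketch of the relevant `elastic` section, assuming a local single-node setup with the default cluster name:

```yaml
elastic:
    host: 127.0.0.1
    port: 9300                 # transport port, not the 9200 HTTP port
    clusterName: elasticsearch # must match cluster.name in elasticsearch.yml
```

Check that the node is actually listening on the transport port before retrying the migration.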