hazelcast / hazelcast-python-client
Hazelcast Python Client
Home Page: https://hazelcast.com/clients/python/
License: Apache License 2.0
handle_error assumes that all errors it gets are IOErrors. A type check is needed in handle_error.
There are user reports that it can get other errors there, like the following:
'Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hazelcast/reactor.py", line 37, in _loop
asyncore.loop(count=1000, timeout=0.01, map=self._map)
File "/usr/lib/python2.7/asyncore.py", line 220, in loop
poll_fun(timeout, map)
File "/usr/lib/python2.7/asyncore.py", line 156, in poll
read(obj)
File "/usr/lib/python2.7/asyncore.py", line 87, in read
obj.handle_error()
File "/usr/local/lib/python2.7/dist-packages/hazelcast/reactor.py", line 129, in handle_error
if error.errno != errno.EAGAIN and error.errno != errno.EDEADLK:
AttributeError: \'exceptions.KeyError\' object has no attribute \'errno\''
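A minimal sketch of the suggested type check (the function name and the set of ignorable errnos are assumptions for illustration): inspect errno only on OS-level errors; anything else, such as a KeyError raised from a handler, has no errno attribute at all.

```python
import errno

def is_transient_io_error(error):
    """Return True only for socket-level errors that are safe to retry.

    Hypothetical helper: a KeyError (or any non-OS error) falls through
    to False instead of raising AttributeError on a missing .errno.
    """
    if isinstance(error, EnvironmentError):  # IOError/OSError family
        return error.errno in (errno.EAGAIN, errno.EDEADLK)
    return False
```

With this guard, handle_error could re-raise or log unexpected error types instead of crashing the reactor loop.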
It seems that the Python client always opens the owner connection with an empty uuid. This causes the server to assign it a new uuid every time the owner connection is broken.
Since a new endpoint cannot be registered on the server for an existing connection, the old endpoint is taken into account on the server, and this results in the endpoint being deleted and the connection closed a second time.
Instead, the Python client should always send its existing uuid to the server when authenticating.
Right now, configuration properties are set with constant strings defined at the top of the module they are used in. This scatters properties all over the code and makes it hard for users to import them one by one.
There should be a common class that holds all the available configuration properties to deal with that problem.
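A minimal sketch of such a holder class; the property names below are illustrative assumptions, not necessarily the client's actual keys.

```python
class ClientProperties(object):
    """Hypothetical single home for configuration property names, so users
    can import them from one place instead of hunting through modules."""

    HEARTBEAT_INTERVAL = "hazelcast.client.heartbeat.interval"
    HEARTBEAT_TIMEOUT = "hazelcast.client.heartbeat.timeout"
    INVOCATION_TIMEOUT_SECONDS = "hazelcast.client.invocation.timeout.seconds"
```

Users would then write `config.set_property(ClientProperties.HEARTBEAT_INTERVAL, 5000)` (method name assumed) rather than importing a constant from whichever module happens to define it.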
Current behaviour: it clears the last known member list on reconnect here:
https://github.com/hazelcast/hazelcast-python-client/blob/master/hazelcast/cluster.py#L220
We expect a test like the following to pass:
def test_reconnect_toNewNode_ViaLastMemberList(self):
    old_member = self.cluster.start_member()
    config = ClientConfig()
    config.network_config.addresses.append("127.0.0.1:5701")
    config.network_config.smart_routing = False
    config.network_config.connection_attempt_limit = 100
    client = self.create_client(config)
    new_member = self.cluster.start_member()
    old_member.shutdown()

    def assert_member_list():
        self.assertEqual(1, len(client.cluster.members))
        self.assertEqual(new_member.uuid, client.cluster.members[0].uuid)

    self.assertTrueEventually(assert_member_list)
We are using the latest version of the repository. If we clone the repository and use the client it works, but it does not work if we install it using pip.
I repeatedly get the error below while connecting to the cluster member:
2017-02-13 13:21:03,890 WARNING cluster _connect_to_cluster:119 6068 140173885757184 Error connecting to Address(host=localhost, port=5701), attempt 2 of 2, trying again in 3 seconds
Traceback (most recent call last):
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/cluster.py", line 115, in _connect_to_cluster
self._connect_to_address(address)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/cluster.py", line 145, in _connect_to_address
f = self._client.connection_manager.get_or_connect(address, self._authenticate_manager)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/connection.py", line 114, in get_or_connect
future = authenticator(connection).continue_with(self.on_auth, connection, address)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/cluster.py", line 142, in _authenticate_manager
return self._client.invoker.invoke_on_connection(request, connection).continue_with(callback)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/invocation.py", line 135, in invoke_on_connection
return self.invoke(Invocation(message, connection=connection), ignore_heartbeat)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/invocation.py", line 148, in invoke_smart
self._send(invocation, invocation.connection, ignore_heartbeat)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/invocation.py", line 223, in _send
connection.send_message(message)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/connection.py", line 282, in send_message
message.add_flag(BEGIN_END_FLAG)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/protocol/client_message.py", line 203, in add_flag
self.set_flags(self.get_flags() | flags)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/protocol/client_message.py", line 85, in get_flags
return struct.unpack_from(FMT_LE_UINT8, self.buffer, FLAGS_FIELD_OFFSET)[0]
TypeError: unpack_from() argument 1 must be string or read-only buffer, not bytearray
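The TypeError comes from older Python 2.7 struct implementations, whose unpack_from rejects a mutable bytearray. A sketch of a portable workaround (FMT_LE_UINT8 mirrors the constant in the traceback; the value is illustrative): copy the bytearray into an immutable bytes object before unpacking.

```python
import struct

FMT_LE_UINT8 = "<B"  # same little-endian unsigned-byte format as the codec

buf = bytearray(b"\xc0")  # stand-in for the client message buffer
# bytes(buf) makes an immutable copy that every struct version accepts;
# current Pythons also accept the bytearray directly.
flags = struct.unpack_from(FMT_LE_UINT8, bytes(buf), 0)[0]
```

The copy costs an allocation per call, so a real fix would instead make the message buffer a read-only type in the first place.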
Clients and members were forced to be configured with the same interface when adding a member address to the config.
hazelcast/hazelcast#14104
With this PRD on the Java side, we have relaxed the rules: as long as hostnames resolve to the same IP when the DNS lookup is made on the client side, the following setups are allowed.
The same needs to be applied to the Python client.
The Python client should be able to discover Hazelcast nodes in hazelcast.cloud using a discovery token.
See original issue on java hazelcast/hazelcast#12733
Java side pr hazelcast/hazelcast#13433
This is what we do on the server. We need to provide the same level of version information on the clients.
INFO: [10.0.0.2]:5701 [dev] [3.6-EA2] Hazelcast 3.6-EA2 (20151125 - 8af85c5) starting at Address[10.0.0.2]:5701
This behavior has changed in hazelcast 3.12. For clients to be able to work with 3.12 members and hot-restart feature, the fix should be applied to all clients.
issue hazelcast/hazelcast#14839
fix hazelcast/hazelcast#14844
test hazelcast/hazelcast#14839
Hi All,
I'm trying to use a map with simple put and get queries.
from hazelcast.serialization.predicate import sql
config = hazelcast.ClientConfig()
config.network_config.addresses.append('127.0.0.1')
identifiedDataSerializable_factory = {Customer.CLASS_ID: Customer}
config.serialization_config.add_data_serializable_factory(FACTORY_ID, identifiedDataSerializable_factory)
client = hazelcast.HazelcastClient(config)
map = client.get_map("customer_map").blocking()
print("map.put", map.put(2, Customer(2, "test", "test")))
print("map.get", map.get(2))
After map.get runs, I get the error "cannot read 1 bytes!"
I also tried to query the map with a SQL predicate:
map.values(sql("name='test'")).add_done_callback(values_callback)
and I receive the error message:
hazelcast.exception.HazelcastSerializationError: No DataSerializerFactory registered for namespace: 1
I'd appreciate your advice.
Thanks,
Niv
Hi, I'm new to hazelcast and trying out a few simple operations. The behaviour of the map is surprising me:
In [15]: k = my_map.key_set()[0]
In [16]: k
Out[16]: 964660
In [17]: my_map.get(k)
In [18]:
I note the warning in the API docs:
Warning: This method uses hash and eq methods of binary form of the key, not the actual implementations of hash and eq defined in key’s class.
The keys of the map are Longs, being populated by a Java process. How should I retrieve a value by key?
Hi,
When subscribing for updates on a map, I'd like to not receive the old value, to save the serialization overhead. As far as I can see from the docs, the include_value=False flag should allow me to do this [1]; however, when setting this to False I don't see the new value either.
def recv_event(event: EntryEvent):
    print(type(event.value), type(event.old_value))

map.add_entry_listener(include_value=False, key=fid, added_func=recv_event, updated_func=recv_event)
# prints <class 'NoneType'> <class 'NoneType'>
map.add_entry_listener(include_value=True, key=fid, added_func=recv_event, updated_func=recv_event)
# prints <class 'dict'> <class 'dict'>
[1] http://hazelcast.github.io/hazelcast-python-client/3.10/hazelcast.proxy.map.html#hazelcast.proxy.map.Map.add_entry_listener
include_value – (bool), whether received events include an old value or not (optional).
Operations on the client sometimes throw ValueError: Address translator could not translate address: None
when Hazelcast Cloud is enabled.
Example stack trace
my_map.put(i, self.test_value.get_random_byte())
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/proxy/map.py", line 511, in put
return self._put_internal(key_data, value_data, ttl)
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/proxy/map.py", line 855, in _put_internal
ttl=to_millis(ttl))
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/proxy/base.py", line 65, in _encode_invoke_on_key
return self._encode_invoke_on_partition(codec, partition_id, invocation_timeout=invocation_timeout, **kwargs)
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/proxy/base.py", line 70, in _encode_invoke_on_partition
return self._client.invoker.invoke_on_partition(request, _partition_id, invocation_timeout).continue_with(response_handler,
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/invocation.py", line 154, in invoke_on_partition
return self.invoke(invocation)
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/invocation.py", line 167, in invoke_smart
self._send_to_address(invocation, addr)
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/invocation.py", line 223, in _send_to_address
self._client.connection_manager.get_or_connect(address).continue_with(self.on_connect, invocation,
File "/usr/local/lib/python2.7/site-packages/hazelcast_python_client-3.10-py2.7.egg/hazelcast/connection.py", line 109, in get_or_connect
raise ValueError("Address translator could not translate address: {}".format(address))
ValueError: Address translator could not translate address: None
Probably related to #76
Currently, if any request takes longer than 120 seconds, the following exception gets raised:
TimeoutError: Request timed out after 120 seconds.
This appears to be controlled by the invocation.INVOCATION_TIMEOUT constant and does not appear tunable. I see that the constructor for Invocation takes a timeout parameter that defaults to that constant value, but as far as I can tell there's no way to get a timeout actually passed in to the Invocation object, since that's all handled under the hood when the client is configured.
I would love it if I could control the timeout on a per-request basis.
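One possible shape for the request, sketched with assumed names (this is not the client's actual plumbing): thread an optional per-request timeout down to the Invocation, falling back to the module constant only when the caller gives none.

```python
INVOCATION_TIMEOUT = 120  # module-level default, in seconds

class Invocation(object):
    """Hypothetical sketch of an Invocation honouring a per-call timeout."""

    def __init__(self, message, timeout=None):
        self.message = message
        # a caller-supplied timeout overrides the 120 s default for this
        # call only; None keeps the current behaviour
        self.timeout = timeout if timeout is not None else INVOCATION_TIMEOUT

def invoke(message, timeout=None):
    # the public entry point would forward the optional timeout unchanged
    return Invocation(message, timeout=timeout)
```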
Under split-brain / member-disconnection conditions, the client throws AttributeError:
File "/home/ec2-user/hz-root/hz-py-lib/hzPyClient/MapGet.py", line 12, in timeStep
val = self.map.get(k)
File "/usr/local/lib/python2.7/site-packages/hazelcast/future.py", line 271, in f
result = inner(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/hazelcast/proxy/map.py", line 341, in get
return self._get_internal(key_data)
File "/usr/local/lib/python2.7/site-packages/hazelcast/proxy/map.py", line 816, in _get_internal
return self._encode_invoke_on_key(map_get_codec, key_data, key=key_data, thread_id=thread_id())
File "/usr/local/lib/python2.7/site-packages/hazelcast/proxy/base.py", line 62, in _encode_invoke_on_key
return self._encode_invoke_on_partition(codec, partition_id, **kwargs)
File "/usr/local/lib/python2.7/site-packages/hazelcast/proxy/base.py", line 66, in _encode_invoke_on_partition
return self._client.invoker.invoke_on_partition(request, _partition_id).continue_with(response_handler, codec,
File "/usr/local/lib/python2.7/site-packages/hazelcast/invocation.py", line 138, in invoke_on_partition
return self.invoke(Invocation(message, partition_id=partition_id))
File "/usr/local/lib/python2.7/site-packages/hazelcast/invocation.py", line 151, in invoke_smart
self._send_to_address(invocation, addr)
File "/usr/local/lib/python2.7/site-packages/hazelcast/invocation.py", line 194, in _send_to_address
self._client.connection_manager.get_or_connect(address).continue_with(self.on_connect, invocation, ignore_heartbeat)
File "/usr/local/lib/python2.7/site-packages/hazelcast/connection.py", line 110, in get_or_connect
message_callback=self._client.invoker._handle_client_message)
File "/usr/local/lib/python2.7/site-packages/hazelcast/reactor.py", line 91, in new_connection
message_callback)
File "/usr/local/lib/python2.7/site-packages/hazelcast/reactor.py", line 114, in __init__
Connection.__init__(self, address, connection_closed_callback, message_callback)
File "/usr/local/lib/python2.7/site-packages/hazelcast/connection.py", line 259, in __init__
self._address = (address.host, address.port)
AttributeError: 'NoneType' object has no attribute 'host'
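A guard sketch (function name and message are assumptions): fail fast with a clear, catchable error instead of an AttributeError when no owner address could be resolved.

```python
def require_address(address):
    """Hypothetical guard for the connection path: reject a None address
    explicitly rather than letting .host blow up later."""
    if address is None:
        raise IOError("Cannot connect: no candidate address (cluster may be unreachable)")
    return address

# the failure mode becomes an explicit IOError the caller can retry on
try:
    require_address(None)
    guarded = False
except IOError:
    guarded = True
```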
The client connection should detect when a connection is reading but not sending any data to the server, to detect the heartbeat-idle case.
The current implementation checks only the last read time, which is not enough for read-only connections, such as a simple listener use case.
Issue from java client: hazelcast/hazelcast#13576
Fix for java client : hazelcast/hazelcast#13577
We also found odd behavior with queue.take. Sometimes, after the 120 s timeout, further messages get dequeued by take(), but calling result() on the corresponding future blocks indefinitely. This usually occurs with the first or second message sent (offered) after the timeout.
(reported by @vsaldarriaga)
Hi
I have a Python 3 producer, so I can't use the client; REST is used for that. The consumer is on top of this client and does not seem to read items added via the REST API.
It reads them as IdentifiedDataSerializable.
I have not been able to override this, or find any similar issues.
Thanks in advance.
Edit: After digging into the Hazelcast code and this library, I guess it is not meant to be used like that.
I created a small override that I could use; it only works one way, but I don't need the other way to work.
Also if you guys might be interested, the gist.
When a client waits for a lock to be released for more than the invocation timeout and the member dies, the client gets an operation timeout exception rather than being redirected to another member.
issue from java client hazelcast/hazelcast#13551
fix to java client hazelcast/hazelcast#13552
Missing key checks in MapFeatNearCache when invalidations occur. If a key doesn't exist in the NearCache, a KeyError is thrown.
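A tolerant-invalidation sketch (the near cache is modeled as a plain dict here): dict.pop with a default ignores keys the cache never stored, which avoids the KeyError without an explicit membership check.

```python
def invalidate(cache, key):
    """Remove a possibly-absent key from the near cache without raising."""
    cache.pop(key, None)

near_cache = {"a": 1}
invalidate(near_cache, "a")        # removes the cached entry
invalidate(near_cache, "missing")  # silently ignored instead of KeyError
```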
Ever since we added the HZ client to one of our components, CPU usage of that component increased significantly. We could track it down to the select syscall, which is used by asyncore.loop, called by AsyncoreReactor.
CPU usage could be reduced by changing that call to loop(count=1, timeout=0.1).
Our application itself makes heavy use of asyncore, so running two threads in one process, each with its own asyncore main loop in it, doesn't really make much sense. We'd appreciate being able to have the HZ client use our main loop, instead of living on its own thread. Would that be feasible?
Given the following scenario:
If the Hazelcast member gets disconnected or crashes, the Python client attempts to reconnect a few times (see the function ClusterService._reconnect in cluster.py), and if the Hazelcast member goes back online soon enough, the Python client nicely reconnects.
The problem is that, quite possibly, the Hazelcast member ID changes after being restarted, and for some reason this change does not get reflected in the list of members kept by the Python client (specifically, ClusterService.members). Thus, if we get a message, say, on a queue, and we attempt to get the member that delivered the message through:
hz_python_client.cluster.get_member_by_uuid(cur_msg.issuingMemberId)
we will get None, even though cur_msg.issuingMemberId is part of the cluster.
I'm not really sure why, but the issue gets fixed by clearing the list of members before attempting to reconnect. We did so by modifying the function ClusterService._reconnect like this:
def _reconnect(self):
    try:
        self.logger.warn("Connection closed to owner node. Trying to reconnect.")
        del self.members[:]
        self._connect_to_cluster()
This slight modification causes the event handler ClusterService._handle_member to be triggered immediately after reconnecting, which in turn properly updates the members list. Certainly, this looks more like a hack than a nice solution. Could you please check it out?
Thank you!
Partition count refresher does not respect connection shutdown.
Hi
I'm wondering if there is any way around the following code to get the state of a map, and subscribe for updates:
for t in self._mymap.values(predicate):
    on_update(t)

registration_id = self._mymap.add_entry_listener(include_value=True,
                                                 added_func=on_update,
                                                 predicate=predicate)
In particular, how to avoid a race condition here?
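One common pattern for this (sketched here with a plain dict and list standing in for the map and its listener machinery; nothing below is the client's real API): subscribe first, buffer events that arrive while the snapshot is being read, then replay the buffer, so no update can fall between values() and add_entry_listener().

```python
buffered, delivered = [], []

def listener(value):
    """Stands in for the add_entry_listener callback: just buffers events."""
    buffered.append(value)

def snapshot_values():
    """Stands in for self._mymap.values(predicate)."""
    return ["v1"]

# 1. register the listener first (here, `listener` is "registered" by fiat)
# 2. read the initial state
for v in snapshot_values():
    delivered.append(v)
# 3. an update racing with the snapshot is buffered, not lost...
listener("v2")
# ...and replayed after the snapshot
delivered.extend(buffered)
```

A real implementation would also have to de-duplicate entries that appear in both the snapshot and the buffer.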
This caused problems in the Java client.
When the heartbeat interval is lower than the connection timeout, it could be the case that the heartbeat closes the connection before it gets the first answer from the user. As a solution, we initialize last_read_time to the current time when the connection is first constructed.
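A minimal sketch of that initialization (class shape assumed): seed last_read_time at construction so the heartbeat checker never sees a brand-new connection as already idle.

```python
import time

class Connection(object):
    """Sketch: a freshly constructed connection counts as "just read"."""

    def __init__(self):
        # without this seed, last_read_time would be 0 (or unset) and the
        # heartbeat check would close the connection immediately
        self.last_read_time = time.time()

conn = Connection()
idle_for = time.time() - conn.last_read_time  # effectively zero at creation
```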
In https://github.com/hazelcast/hazelcast-python-client/blob/master/hazelcast/protocol/codec/map_put_all_codec.py#L15 this method misses 4 bytes that represent the size of the following list.
There should be an additional
data_size += INT_SIZE_IN_BYTES
def encode_request(class_definitions):
    """Encode request into client_message"""
    client_message = ClientMessage(payload_size=calculate_size(class_definitions))
    client_message.set_message_type(REQUEST_TYPE)
    client_message.set_retryable(RETRYABLE)
    client_message.append_int(len(class_definitions))
    for class_definitions_item in class_definitions:  # This is an empty for loop.
    client_message.update_frame_length()
    return client_message
In the encode_request method, the generated code does not generate the body of the for loop.
The named arguments are defined as key_data in MapFeatNearCache._handle_invalidation and key_data_list in MapFeatNearCache._handle_batch_invalidation, but are referenced as key and keys (respectively) in map_add_near_cache_entry_listener_codec.py, causing a TypeError at runtime.
Related stack trace. Since user invocation cancellation is expected behaviour, we should not see the Empty error.
2017-02-17 02:59:45,158 [ERROR] [ 12] [-:-] reactor.py:44 'Error in Reactor Thread'
'Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/hazelcast/reactor.py", line 38, in _loop
self._check_timers()
File "/usr/local/lib/python2.7/dist-packages/hazelcast/reactor.py", line 58, in _check_timers
self._timers.get_nowait()
File "/usr/lib/python2.7/Queue.py", line 190, in get_nowait
return self.get(False)
File "/usr/lib/python2.7/Queue.py", line 165, in get
raise Empty
Empty'
In Python 3.5, await and async were added to the language as identifiers, and in Python 3.7 they became reserved keywords.
The CountDownLatch class has a method named await, which causes a SyntaxError in Python 3.7. This method should be renamed to support Python 3.7.
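The breakage is visible without the client at all: on Python 3.7+, merely defining a method named await is rejected at compile time (the class body below is a stand-in, not the client's actual CountDownLatch).

```python
source = (
    "class CountDownLatch:\n"
    "    def await(self):\n"   # reserved keyword as a method name
    "        pass\n"
)
try:
    compile(source, "<sketch>", "exec")
    rejected = False
except SyntaxError:
    rejected = True  # True on Python 3.7 and later
```

This is why the rename cannot be deferred: the module fails to import on 3.7 before any latch is ever used.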
The Python client should support TLS with mutual authentication to allow encrypted socket-level communication.
master (commit 4e017c5)
Test tests.heartbeat_test.HeartbeatTest.test_heartbeat_stopped fails on Windows: https://hazelcast-l337.ci.cloudbees.com/view/Official%20Builds/job/python-client-windows/32/testReport/tests.heartbeat_test/HeartbeatTest/test_heartbeat_stopped/
Stacktrace:
Traceback (most recent call last):
File "C:\Python27\lib\unittest\case.py", line 329, in run
testMethod()
File "C:\Jenkins\workspace\python-client-windows\tests\heartbeat_test.py", line 77, in test_heartbeat_stopped
self.assertTrueEventually(assert_heartbeat_stopped_and_restored)
File "C:\Jenkins\workspace\python-client-windows\tests\base.py", line 60, in assertTrueEventually
assertion()
File "C:\Jenkins\workspace\python-client-windows\tests\heartbeat_test.py", line 70, in assert_heartbeat_stopped_and_restored
self.assertEqual(1, len(stopped_collector.connections))
File "C:\Python27\lib\unittest\case.py", line 513, in assertEqual
assertion_func(first, second, msg=msg)
File "C:\Python27\lib\unittest\case.py", line 506, in _baseAssertEqual
raise self.failureException(msg)
AssertionError: 1 != 0
I'm trying to access a Hazelcast map through the Python client. I'm trying to execute the file https://github.com/hazelcast/hazelcast-python-client/blob/master/examples/main.py
I get the following error: ImportError: No module named 'Queue'
I'm running Python 3.5.2. It seems that the Python client is not made for Python 3.
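The module was renamed from Queue to queue in Python 3, so code targeting both versions typically uses an import shim like the following sketch:

```python
# Portable import: prefer the Python 3 name, fall back to the Python 2 one.
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2
```

After the shim, `queue.Queue`, `queue.Empty`, etc. work identically under both interpreters.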
About 30% of the time, when launching a local Python application that has been configured to connect to two local servers in a cluster, I get the following error after which the application hangs:
Traceback (most recent call last):
File "venv\lib\site-packages\hazelcast\reactor.py", line 38, in _loop
self._check_timers()
File "venv\lib\site-packages\hazelcast\reactor.py", line 60, in _check_timers
self._timers.get_nowait()
File "C:\Program Files (x86)\Python37-32\lib\queue.py", line 198, in get_nowait
return self.get(block=False)
File "C:\Program Files (x86)\Python37-32\lib\queue.py", line 180, in get
item = self._get()
File "C:\Program Files (x86)\Python37-32\lib\queue.py", line 236, in _get
return heappop(self.queue)
TypeError: '<' not supported between instances of 'Timer' and 'Timer'
In addition, if I am able to start the application and connect successfully, the application will run for some time without issue until it eventually runs into this error once again. At that point, the only fix is to forcibly stop the application and relaunch it.
I am using Python 3.7 with hazelcast-python-client version 3.9.
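The TypeError comes from Python 3's heapq, which requires heap items to be orderable. A sketch of a fix (the Timer shape is assumed, modeled only by its scheduled end time): give Timer a total order so heappush/heappop can compare two timers.

```python
import heapq

class Timer(object):
    """Sketch: order timers by their scheduled end time."""

    def __init__(self, end_time):
        self.end_time = end_time

    def __lt__(self, other):
        # heapq only needs __lt__ to maintain the heap invariant
        return self.end_time < other.end_time

timers = []
heapq.heappush(timers, Timer(30))
heapq.heappush(timers, Timer(10))
soonest = heapq.heappop(timers)  # the timer due first
```

An alternative is pushing `(end_time, counter, timer)` tuples, which also breaks ties deterministically.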
My Hazelcast client + config is pretty vanilla (copied from the example code) and looks like this:
import hazelcast

class HazelcastClient:
    def __init__(self):
        self.client = self._configure_cluster()

    def _configure_cluster(self):
        config = hazelcast.ClientConfig()
        print("Cluster name: {}".format(config.group_config.name))
        config.network_config.addresses.append("127.0.0.1:5701")
        config.network_config.addresses.append("127.0.0.1:5702")
        client = hazelcast.HazelcastClient(config)
        print("Client is {}".format(client.lifecycle.state))
        return client
When I set the in-memory-format to OBJECT, I get the below error. My value is basically a python dictionary:
2017-02-13 16:58:29,070 ERROR precal_consumer handle_message:162 8527 140214979946240 Got an exception while processing events <type 'collections.defaultdict'>
Traceback (most recent call last):
File "/opt/ns/bin/precal/precal_consumer.py", line 156, in handle_message
self.store_to_hazelcast_cache(query_obj, result)
File "/opt/ns/bin/precal/precal_consumer.py", line 228, in store_to_hazelcast_cache
hazelcast_report.put(mongo_id.key_to_hashid(group_key), values_dict)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/proxy/map.py", line 499, in put
value_data = self._to_data(value)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/serialization/base.py", line 76, in to_data
handle_exception(sys.exc_info()[1], sys.exc_info()[2])
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/serialization/base.py", line 68, in to_data
serializer = self._registry.serializer_for(obj)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/serialization/base.py", line 205, in serializer_for
serializer = self.lookup_custom_serializer(obj_type)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/serialization/base.py", line 258, in lookup_custom_serializer
serializer = self.register_from_super_type(obj_type, super_type)
File "/opt/ns/nsenv/local/lib/python2.7/site-packages/hazelcast/serialization/base.py", line 300, in register_from_super_type
serializer = self._type_dict[super_type]
HazelcastSerializationError: <type 'collections.defaultdict'>
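A workaround that follows from the traceback (the put call below is the reporter's, shown only as context): the serializer registry looks up the value's exact type and has no serializer for collections.defaultdict, so converting to a plain dict before the put sidesteps the lookup failure.

```python
from collections import defaultdict

values_dict = defaultdict(list)   # the kind of value that fails to serialize
values_dict["scores"].append(42)

# converting to a built-in dict gives the registry a type it knows;
# the copy is shallow, so nested values are shared, not duplicated
plain = dict(values_dict)
# hazelcast_report.put(key, plain)  # what the reporter's code would then call
```

A more complete fix would be registering a custom serializer for defaultdict, or having the registry fall back to dict for dict subclasses.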
https://github.com/hazelcast/hazelcast-python-client/blob/master/hazelcast/cluster.py#L103
I think there is a bug here: current_attempt should be increased after trying all addresses, not after each one. Also, according to the Java client, the possible addresses should be recomputed each time the client fails to connect to the cluster after trying all of them, so it should be called attempt_limit times, I guess.
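A sketch of the suggested counting (try_connect is a stand-in for the real connection attempt; names are assumptions): the attempt counter advances only after a full pass over every known address, so each attempt covers the whole address list.

```python
def connect_to_cluster(addresses, attempt_limit, try_connect):
    """Try every address once per attempt; give up after attempt_limit passes."""
    for attempt in range(attempt_limit):
        # in the real client, the address list would be recomputed here
        # at the start of each pass
        for address in addresses:
            if try_connect(address):
                return True
    return False
```

With 2 addresses and attempt_limit=3 this makes 6 connection tries, whereas incrementing per address would stop after 3.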
I am trying to run a Python client program using Python 2.7.6 and I am getting this exception:
Traceback (most recent call last):
File "main.py", line 6, in
from hzrc.client import HzRemoteController
ImportError: No module named hzrc.client
Hi,
As per the document below, we used to get map statistics in Java:
http://docs.hazelcast.org/docs/latest-development/manual/html/Management/Getting_Member_Statistics/Map_Statistics.html#page_Map+Statistics
We are now implementing the Python client for a Hazelcast cluster, but we are not able to find an API for map statistics in Python. Please let us know whether there is anything like that to get map statistics in Python via the python-client.
Java --> LocalMapStats mapStatistics = customers.getLocalMapStats();
Python ----> ???
Regards,
Sreenath
Reproducer:
1 - Start a Hazelcast Member
2 - Run below Java code (Java 8 needed):
public static void main(String[] args) {
    System.setProperty("hazelcast.logging.type", "none");
    HazelcastInstance hz = HazelcastClient.newHazelcastClient();
    try {
        IMap<Object, Object> test = hz.getMap("test");
        if (test.size() <= 10_000) {
            byte[] data = new byte[1024];
            ThreadLocalRandom.current().nextBytes(data);
            fillMap(test, 10_000, data);
        }
        System.out.println("Map Size = " + test.size());
        System.out.println(Instant.now());
        LongAdder tmp = new LongAdder();
        long count = test.entrySet()
                .stream()
                .peek(e -> tmp.add(e.getValue().hashCode() + e.getKey().hashCode()))
                .count();
        System.out.println("Entries Iterated = " + count);
        System.out.println(Instant.now());
        System.out.println(tmp.longValue());
    } finally {
        Hazelcast.shutdownAll();
    }
}

private static void fillMap(IMap<Object, Object> imap, int size, Object data) {
    imap.clear();
    IntStream.range(0, size).forEach(i -> imap.put(i, data));
}
The result of this operation is below; on my laptop it took about 60-70 ms:
Map Size = 10000
2019-02-11T21:20:57.024Z
Entries Iterated = 10000
2019-02-11T21:20:57.088Z
10846178286848
3 - Run below Python client code:
# -*- coding: utf-8 -*-
"""
Created on Mon Jan 14 09:11:59 2019
@author:
"""
import hazelcast, logging
from datetime import datetime

HC_ADDRESS = ['localhost:5701']
MAP_NAME = 'test'

def client_startup():
    config = hazelcast.ClientConfig()
    for addr in HC_ADDRESS:
        config.network_config.addresses.append(addr)
    # basic logging setup to see client logs
    logging.basicConfig()
    logging.getLogger().setLevel(logging.INFO)
    client = hazelcast.HazelcastClient(config)
    return client

def client_shutdown(client):
    client.shutdown()

def test_sync(client):
    print('USING SYNC...')
    # connect to map
    hc_map = client.get_map(MAP_NAME).blocking()
    # print map size
    print('map size: {}'.format(hc_map.size()))
    # test with iterator
    print('WITH ITERATOR...')
    print('{} start'.format(str(datetime.now())))
    es = iter(hc_map.entry_set())
    es_list = [e for e in es]
    print('{} stop'.format(str(datetime.now())))
    print('list length: {}'.format(len(es_list)))
    # test without iterator
    print('\nWITHOUT ITERATOR...')
    print('{} start'.format(str(datetime.now())))
    es = hc_map.entry_set()
    es_list = [e for e in es]
    print('{} stop'.format(str(datetime.now())))
    print('list length: {}'.format(len(es_list)))

def test_async(client):
    print('USING ASYNC...')
    # connect to map
    hc_map = client.get_map(MAP_NAME)
    # print map size
    print('map size: {}'.format(hc_map.size().result()))
    # test with iterator
    print('WITH ITERATOR...')
    print('{} start'.format(str(datetime.now())))
    es = iter(hc_map.entry_set().result())
    es_list = [e for e in es]
    print('{} stop'.format(str(datetime.now())))
    print('list length: {}'.format(len(es_list)))
    # test without iterator
    print('\nWITHOUT ITERATOR...')
    print('{} start'.format(str(datetime.now())))
    es = hc_map.entry_set().result()
    es_list = [e for e in es]
    print('{} stop'.format(str(datetime.now())))
    print('list length: {}'.format(len(es_list)))

if __name__ == '__main__':
    client = client_startup()
    print('\n--------------------------------------\n')
    test_sync(client)
    print('\n--------------------------------------\n')
    test_async(client)
    client_shutdown(client)
    print('process done...')
Same operation here took around 1.5 seconds:
USING SYNC...
map size: 10000
WITH ITERATOR...
2019-02-11 13:24:37.506658 start
2019-02-11 13:24:41.732174 stop
list length: 10000
WITHOUT ITERATOR...
2019-02-11 13:24:41.732207 start
2019-02-11 13:24:43.075291 stop
list length: 10000
--------------------------------------
USING ASYNC...
map size: 10000
WITH ITERATOR...
2019-02-11 13:24:43.076840 start
2019-02-11 13:24:44.318692 stop
list length: 10000
WITHOUT ITERATOR...
2019-02-11 13:24:44.318747 start
2019-02-11 13:24:45.742133 stop
list length: 10000
4 - If you change the data from byte[] to String by changing the fillMap(test, 10_000, data) line to fillMap(test, 10_000, new String(data, StandardCharsets.UTF_8)), then the Java client took 90-100 ms to run the same code, but the Python code took much longer, around 25 seconds:
USING SYNC...
map size: 10000
WITH ITERATOR...
2019-02-11 13:27:57.039696 start
2019-02-11 13:28:30.277160 stop
list length: 10000
WITHOUT ITERATOR...
2019-02-11 13:28:30.277190 start
2019-02-11 13:28:54.060542 stop
list length: 10000
--------------------------------------
USING ASYNC...
map size: 10000
WITH ITERATOR...
2019-02-11 13:28:54.062131 start
2019-02-11 13:29:19.383447 stop
list length: 10000
WITHOUT ITERATOR...
2019-02-11 13:29:19.383481 start
2019-02-11 13:29:43.178533 stop
list length: 10000
I've been working on making the client compatible with both Python 2 and 3, with respect to issue #29.
I feel it would be better if we migrated the client to Python 3 instead of maintaining Python 2 and 3 compatibility, which is a really tedious process (#72).
We can have the Python 3 version of the client on a different branch altogether so that it doesn't affect the current master code; this also helps people choose the version they want to use.
Would like to know the thoughts of the community on this.
Thank you.
Example 3-read_from_a_map.py is currently failing on master and tag v3.9.
list(greetings_map.key_set().result()) = [None, None, None, None, None]
Root cause is that we are not returning val if it is not None.
class ImmutableLazyDataList(Sequence):
    def __getitem__(self, index):
        val = self._list_obj[index]
        if not val:
            data = self._list_data[index]
            if isinstance(data, tuple):
                (key, value) = data
                self._list_obj[index] = (self.to_object(key), self.to_object(value))
            else:
                self._list_obj[index] = self.to_object(data)
            return self._list_obj[index]
            # ^ this return should be unindented one level, so the cached
            #   value is also returned when val is already deserialized:
        return self._list_obj[index]
User-configured intervals of less than 10 seconds are not respected when there is no interaction with the server.
The issue was found when setting the heartbeat interval to 1 second and the heartbeat timeout to 4 seconds.
This is because the client wakes up and runs the heartbeat check every 10 seconds; when there is no communication, the client always complains about the heartbeat.
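A sketch of one possible fix (names and the 10-second constant are taken from the description; the function is hypothetical): schedule the heartbeat check at the configured interval when it is shorter than the default wakeup, so sub-10-second settings are honoured.

```python
DEFAULT_WAKEUP_SECONDS = 10  # the fixed wakeup the report describes

def heartbeat_wakeup(configured_interval):
    """Wake up at the configured interval when it is shorter than the
    default, instead of always sleeping the full 10 seconds."""
    return min(DEFAULT_WAKEUP_SECONDS, configured_interval)
```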