seattletestbed / affix
Transparent network call wrapping for augmented functionality
License: MIT License
The current NAT Decider Affix only checks whether the local IP address falls in the private address range. However, this is not sufficient; we need a better method of detecting whether the node is behind a NAT.
One method of doing this is to have the local node contact an external service, and have that service try to connect back to the node. If the connection succeeds, the local node is not behind a NAT.
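A minimal sketch of the difference between the two checks. `ip_is_private` mirrors the current heuristic, while `is_behind_nat` models the proposed reachability test; the external service, its protocol, and these helper names are all hypothetical, since the issue does not specify them:

```python
# Sketch only: the external "try to connect back" service is modeled
# as a boolean result; the real protocol is not specified in this issue.
import ipaddress

def ip_is_private(ip):
    # Current NatDeciderAffix heuristic: private address => assume NAT.
    return ipaddress.ip_address(ip).is_private

def is_behind_nat(reachable_from_outside):
    # Proposed check: NATted iff the external service could NOT connect back.
    return not reachable_from_outside

# The old heuristic says nothing about a public-IP node behind a firewall:
assert ip_is_private("10.0.0.5")
assert not ip_is_private("128.238.64.141")
# The proposed check classifies reachability directly:
assert is_behind_nat(reachable_from_outside=False)
assert not is_behind_nat(reachable_from_outside=True)
```

The point of the sketch is that reachability, not address range, is the property we actually care about.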
Currently the RSAAffix generates a new public/private key pair for every connection. We need to modify it slightly to allow users to provide their own keys if they want to.
Currently, all the existing unit tests exercise each AFFIX component together with the core AFFIX framework as a whole. However, we need some smaller and more specific tests for just the core AFFIX code, to ensure that the core code is solid and doesn't break.
The current CoordinationAffix in monzum/master-code still uses advertise_objects, despite its having been replaced by advertisepipe a while ago, in Seattle's r7220. This obviously doesn't make sense.
When a timeout is provided for openconnection, the CoordinationAffix passes a negative timeout value on to the Repy openconnection() function. This needs to be fixed in the CoordinationAffix. Below is a traceback:
Traceback (most recent call last):
File "test_simple_affix_stack.py", line 84, in <module>
main()
File "test_simple_affix_stack.py", line 60, in main
sockobj = new_affix_obj.openconnection(server_address, server_port, getmyip(), server_port + 1, 10)
File "affixstackinterface", line 194, in openconnection
File "coordinationaffix", line 252, in openconnection
File "affixstackinterface", line 194, in openconnection
File "baseaffix", line 252, in openconnection
File "affix_wrapper_lib", line 169, in openconnection
File "/home/monzum/exdisk/work/affix_library/emulcomm.py", line 1272, in openconnection
raise RepyArgumentError("Provided timeout is not valid, must be positive! Timeout: "+str(timeout))
exception_hierarchy.RepyArgumentError: Provided timeout is not valid, must be positive! Timeout: -10.0169849396
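The traceback above ends with a negative timeout reaching Repy. A minimal sketch of one possible fix, assuming the CoordinationAffix records when its advertise lookup started; the function and exception names here are illustrative, not the actual coordinationaffix code:

```python
import time

def remaining_timeout(timeout, lookup_started_at, now=None):
    # Clamp the leftover timeout: a lookup that consumed the whole
    # budget should surface as a timeout error, never as a negative
    # value passed down to Repy's openconnection().
    now = time.time() if now is None else now
    leftover = timeout - (now - lookup_started_at)
    if leftover <= 0:
        raise TimeoutError("Affix stack lookup consumed the entire timeout")
    return leftover

assert remaining_timeout(10, 100.0, now=103.0) == 7.0  # 3s lookup, 7s left
try:
    remaining_timeout(10, 100.0, now=120.0)  # 20s lookup > 10s budget
    assert False, "should have raised"
except TimeoutError:
    pass
```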
Currently there are conflicting versions of certain files in different directories of the GitHub repository (e.g., from the Affix repo and from Seattle libraries). This should really be cleaned up. At the very least, there should be a README that describes the order of precedence in which to copy the various files to a new directory.
Perhaps, since all files need to be in the same directory or Python path to run anyway, we should consider a flat directory structure in the GitHub repository.
If the server is using the AFFIX framework and it either fails to advertise the AFFIX string, or the client is unable to perform the advertise lookup for the server node, the client believes that the server is a legacy node and will try to contact it using an empty stack. This could cause a mismatch between the message being sent and the message being received, and neither the server nor the client would know any better.
Example: Server uses AsciiShiftingAffix and successfully announces the AFFIX string using the CoordinationAffix. Client uses CoordinationAffix to do a lookup for the server's AFFIX string but gets an empty list back. Thus client believes that server is using a legacy connection and uses an empty stack. Due to the AFFIX stack being imbalanced on both sides, the application on the server side will receive a garbled message.
Is this the effect that we want?
We need to test the Mobility Affix more thoroughly to ensure that it works correctly and does not produce any unexpected output.
Currently when any internal error occurs in the AFFIX stack or in the AFFIX interface, an internal error is raised. Instead we should have a logging mechanism to log any warnings and errors in detail.
We need a very simple way of allowing Python developers to import the AFFIX framework into Python without having to use Repy. We need a library that overloads all network calls such that all network calls will use the AFFIX framework instead of making direct network calls.
Minor issue here: we are using the variable name socket all over the place, but in the function below it's called sockobj, except for a debug message that still references socket:
"affix_wrapper_lib.r2py", line 263, in tcpserversocket_getconnection
Exception (with type 'exceptions.NameError'): global name 'socket' is not defined
Refactoring to use socket consistently; the fix will be ready in an instant.
Previously, the CoordinationAffix had a thread that constantly advertised the zenodotus name along with the node's local IP. This thread was started when the user first called listenforconnection(). It had a secondary purpose of re-advertising if the local IP address of the node changed.
With the recent changes to advertise_objects.repy, which split it into advertisepipe and cachedadvertise, we will use advertisepipe to advertise the zenodotus name instead of having a dedicated thread for it. However, this loses the secondary purpose of re-advertising a different IP address when the address changes. This is not an issue for the Seattle nodemanager, as the nodemanager resets and creates a new listening socket when the local IP address changes. However, it may be an issue for generic applications.
The purpose of this advertisement is solely for legacy clients that are not using Affix but may be trying to connect to a server using the zenodotus name. Should this re-advertisement remain in the CoordinationAffix, or should it be moved elsewhere (perhaps to something like the MobilityAffix)?
The copy() method in AffixStack uses peek() to perform the copy and may carry over some references. This needs to be changed to ensure that the copy holds no references to the old stack.
I'm writing an Affix component that should have its optional_args set to some default values. When I build the stack from an Affix string with no arguments, like so,
server_affix_object = AffixStackInterface("(MakeMeHearAffix)")
then MakeMeHearAffix's constructor is called with an empty list for the optional_args, thus overwriting my default values. Clearly, I can work around this and set the defaults to be active if [] is passed in place of proper optional arguments, but I would prefer that the familiar Pythonic way Just Worked (TM).
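The usual Pythonic workaround is a sentinel default in the constructor that falls back to the component's own defaults when nothing, or an empty list, is passed. A sketch, where the class suffix and the DEFAULT_ARGS values are illustrative placeholders:

```python
# Placeholder defaults; the real component's defaults would go here.
DEFAULT_ARGS = ["default-server", "63100"]

class MakeMeHearAffixSketch(object):
    def __init__(self, next_affix=None, optional_args=None):
        # Treat both None and the empty list the stack builder passes
        # as "no arguments given", and fall back to the defaults.
        if not optional_args:
            optional_args = list(DEFAULT_ARGS)
        self.optional_args = optional_args

assert MakeMeHearAffixSketch(optional_args=[]).optional_args == DEFAULT_ARGS
assert MakeMeHearAffixSketch(optional_args=["x"]).optional_args == ["x"]
```

A cleaner long-term fix would be for the stack builder itself to pass None rather than [] when the Affix string carries no arguments.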
The MakeMeHearAffix queries the CanIHear (formerly CanIListen) service to connect back to the node and see whether it is firewalled or NATted. It needs to do a better job of handling CanIHearErrors like the one below, as reported by @yyzhuang: in case of an unresolvable error from the service, just assume the node needs NAT traversal.
It would probably also make sense to extend the CanIHear code to use more diverse exceptions, so as to make error signaling more accurate.
Traceback from the nodemanager log:
1459867033.32:PID-14313:Running canilisten test for local address b27411dec9c508b5c71a70910bf8a35a08caff85 port 64310
1459867034.39:PID-14313:[ERROR] setting up nodemanager serversocket on address b27411dec9c508b5c71a70910bf8a35a08caff85:64310: AddressBindingError("MakeMeHearAffix: Error looking up or contacting canilisten servers. Exception encountered: CanIListenError('Did not find any servers advertising canilisten',)",)
1459867034.39:PID-14313:Traceback (most recent call last):
File "/storage/emulated/0/Android/data/com.sensibility_testbed/files/sl4a/seattle/seattle_repy/nmmain.py", line 366, in start_accepter
File "/storage/emulated/0/Android/data/com.sensibility_testbed/files/sl4a/seattle/seattle_repy/nmmain.py", line 311, in new_timeout_listenforconnection
File "coordinationaffix.r2py", line 93, in listenforconnection
File "makemehearaffix.r2py", line 134, in listenforconnection
AddressBindingError: MakeMeHearAffix: Error looking up or contacting canilisten servers. Exception encountered: CanIListenError('Did not find any servers advertising canilisten',)
The existing documentation on Affix is out of date, and not all of the existing code is helpful for getting started. The Seattle Wiki has a couple of old "shim" documents that may or may not be relevant to the current state of Affix.
On a high level, https://seattle.poly.edu/wiki/WritingShims-RepyV2 goes in the right direction, I think, i.e. showing how an application can be amended to include Affixes, and then how to build one's own Affix.
Find my suggestions for a reworked version over here.
When the Zenodotus service is unavailable or the sought Zenodotus name does not exist (yet), affix_wrapper_lib's attempts to gethostbyname() the Zenodotus name fail. Lookup-before-registered race conditions like this one have caused trouble in Seattle before, see https://seattle.poly.edu/ticket/1385 .
As per #43 for the CoordinationAffix, the affix_wrapper_lib should use the new cachedadvertise and advertise_pipe libraries, and possibly natdecideraffix's resolve logic, to avoid going through actual DNS lookups of purportedly self-announced Zenodotus names.
TCPRelayAffix's forwarding function is responsible for relaying data between a server (a host registered for "listening" via this relay) and a client (a node connecting to said server). It has a bug that makes it buffer data until it detects that the socket connecting it to either the server or the client is closed. Only then does it actually relay the received data to the other party.
This renders the relay barely usable for connections that employ timeouts (as buffering might consume much of the timeout), and totally useless for connections exchanging more than one message at a time.
A fix should ensure that the minimum sane amount of data is buffered on each read (e.g. one Ethernet frame's worth), and that it is sent out ASAP to the other side.
Keep in mind corner cases such as one side sending data then immediately closing the connection. From its TCP stack's perspective, the data has been delivered to the remote stack. In fact, that remote stack is only the relay, not the actual destination host of the data. The relay should then not read more of the data the destination sent, still deliver the remaining source data, and close the connection as soon as all the source data is delivered.
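The intended behavior can be sketched as a small forwarding loop. The socket API is reduced to recv/send/close and fake sockets stand in for real connections; everything here is illustrative, not the actual tcp_relay.r2py code:

```python
class SocketClosed(Exception):
    pass

class FakeSocket(object):
    """Stand-in for a TCP socket: recv() pops queued chunks, then raises."""
    def __init__(self, chunks=()):
        self.chunks = list(chunks)
        self.sent = []
        self.closed = False
    def recv(self, maxbytes):
        if not self.chunks:
            raise SocketClosed()
        return self.chunks.pop(0)
    def send(self, data):
        self.sent.append(data)
    def close(self):
        self.closed = True

def relay(src, dst, chunk_size=1500):   # ~one Ethernet frame per read
    while True:
        try:
            data = src.recv(chunk_size)
        except SocketClosed:
            break
        dst.send(data)                  # forward immediately, not on close
    dst.close()                         # propagate close once src is drained

src, dst = FakeSocket([b"hello ", b"world"]), FakeSocket()
relay(src, dst)
assert dst.sent == [b"hello ", b"world"]  # delivered before the close
assert dst.closed
```

Note how the corner case above falls out naturally: when one side closes, the loop first drains and forwards all remaining data, and only then closes the other side.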
Currently, the method of escaping the AFFIX framework and making actual network calls is located in BaseAffix. However, this means that every stack must have at least one AFFIX component that inherits from BaseAffix in order to make network calls. This should be changed such that the RepyAPI wrapper is inserted at the bottom of the stack as an AFFIX component, rather than being part of BaseAffix.
When Affixes are popped from the stack, the RepyNetworkAPIWrapper might be placed into the bottom of the stack when we have popped all the Affixes. This may cause a problem when attempting to peek(), as RepyNetworkAPIWrapper does not have the attribute peek().
One option would be to raise an error once we have reached the bottom of the stack. A second option would be to add in the attributes peek(), push() and pop() into the RepyNetworkAPIWrapper. Thoughts?
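A sketch of what the first option might look like; the class and exception names are made up for illustration:

```python
class AffixStackEmptyError(Exception):
    """Option 1: a dedicated error once the bottom of the stack is reached."""
    pass

class RepyNetworkAPIWrapperSketch(object):
    # Option 2 would instead give this class real peek()/push()/pop()
    # methods; here peek() raises a descriptive error rather than the
    # AttributeError the current code produces.
    def peek(self):
        raise AffixStackEmptyError(
            "Reached the RepyNetworkAPIWrapper at the bottom of the stack")

raised = False
try:
    RepyNetworkAPIWrapperSketch().peek()
except AffixStackEmptyError:
    raised = True
assert raised
```

Either way, callers get a deliberate, documented failure mode instead of an accidental AttributeError.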
Currently, if any internal AFFIX error occurs when making a network call, the error percolates up to the application layer. However, we want to prevent this in order to keep the AFFIX library semantically consistent with the network API. Internal errors should not be seen by the application.
Previously, the AffixStack.copy() method used the locally stored affix_string to create a new AffixStack object. However, this had the issue of not copying any AFFIX components that had been pushed onto the stack, and of including components that had already been popped.
Now the copy() method has been changed to use the advertisement string to build the new stack. This is more up to date, as the advertisement string is freshly generated each time get_advertisement_string() is called.
However, the copy() method will now no longer copy AFFIX components that are one-sided and do not advertise themselves. Examples of these kinds of Affixes are LogAffix, StatAffix, NoopAffix, etc.
The question is: is this the effect we want? Or is creating a new AffixStack from an AFFIX string the wrong way of implementing the copy() method?
Currently the CoordinationAffix uses both advertise_objects.repy and advertise.repy to perform advertise_announce(). It should conform to one and use the same mechanics throughout. Once ticket https://seattle.poly.edu/ticket/1387 has been addressed, it should use the new advertise objects to perform all advertise_announce and lookup calls.
This ticket may be invalid now. Opening up the ticket so we can confirm this.
When running utf on the file ut_shims_multipath_basetest.py from the repy_v2 branch, it hangs indefinitely utilizing 0% CPU. I ran the test as a python file, and I get this output often:
Launching server NodeA on port 36829
python : ---
At line:1 char:1
+ python ut_shims_multipath_basetest.py > stacktrace.txt 2>&1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (---:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
Uncaught exception!
---
Following is a full traceback, and a user traceback.
The user traceback excludes non-user modules. The most recent call is displayed last.
Full debugging traceback:
"C:\Users\Leonard\v2seash\emultimer.py", line 115, in wrapped_func
"multipathtest_helper", line 26, in _server_helper
"shimstackinterface", line 76, in listenforconnection
"shim_stack", line 203, in __init__
"shim_stack", line 430, in make_shim_stack
"shim_stack", line 124, in find_and_register_shim
"dylink_code", line 457, in _dy_import_global
"dylink_code", line 327, in dylink_import_global
User traceback:
"multipathtest_helper", line 26, in _server_helper
"shimstackinterface", line 76, in listenforconnection
"shim_stack", line 203, in __init__
"shim_stack", line 430, in make_shim_stack
"shim_stack", line 124, in find_and_register_shim
"dylink_code", line 457, in _dy_import_global
"dylink_code", line 327, in dylink_import_global
Exception (with type 'exceptions.Exception'): Caught exception while initializing module (multipathshim)! Debug String: ---
Following is a full traceback, and a user traceback.
The user traceback excludes non-user modules. The most recent call is displayed last.
Full debugging traceback:
"dylink_code", line 324, in dylink_import_global
"dylink_code", line 521, in evaluate
"C:\Users\Leonard\v2seash\virtual_namespace.py", line 117, in evaluate
"C:\Users\Leonard\v2seash\safe.py", line 522, in safe_run
"multipathshim", line 31, in <module>
User traceback:
"dylink_code", line 324, in dylink_import_global
"dylink_code", line 521, in evaluate
"multipathshim", line 31, in <module>
Exception (with class 'ShimInternalError'): Unable to import msg_chunk_lib for MultiPathShim.Caught exception while initializing module (msg_chunk_lib)! Debug String:
---
Following is a full traceback, and a user traceback.
The user traceback excludes non-user modules. The most recent call is displayed last.
Full debugging traceback:
"dylink_code", line 324, in dylink_import_global
"dylink_code", line 504, in evaluate
"C:\Users\Leonard\v2seash\safe.py", line 657, in copy
"C:\Users\Leonard\v2seash\safe.py", line 588, in __init__
User traceback:
"dylink_code", line 324, in dylink_import_global
"dylink_code", line 504, in evaluate
Exception (with type 'exceptions.RuntimeError'): dictionary changed size during iteration
---
---
---
Launching server NodeB on port 35349
Otherwise, it will hang indefinitely after the main thread exits.
I am running on Windows 8.1, with Python 2.7.4. I found ticket #1018 to have a similar issue, but looking at the dependencies of the mentioned test file above, this seems to be valid RepyV2 so I think it is a different problem.
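The innermost exception in the traceback above ('dictionary changed size during iteration') is a generic Python pitfall: some code in safe.py's copy path appears to mutate a dict while iterating over it. A minimal reproduction and the usual fix, iterating over a snapshot of the keys:

```python
# Reproduce the RuntimeError: delete from a dict mid-iteration.
d = {"a": 1, "b": 2, "c": 3}
raised = False
try:
    for key in d:
        if d[key] > 1:
            del d[key]          # mutating while iterating -> RuntimeError
except RuntimeError:
    raised = True
assert raised

# The usual fix: iterate over a snapshot of the keys.
d = {"a": 1, "b": 2, "c": 3}
for key in list(d):
    if d[key] > 1:
        del d[key]
assert d == {"a": 1}
```

Whether this mutation happens in safe.py itself or in code it runs is something the actual debugging would have to establish.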
One of the TCP relays crashed when it encountered an exception it wasn't expecting. The client that connected in searched for a server that hadn't registered (or had deregistered already), but before we could send it an error message, it hung up, tripping session_sendmessage.
From the log:
[1406823633.8523] INFO: Incoming TCP connection request from '128.238.64.141:60693' for server '2a44808b9a0ebb76753ce33bd2ddd80d985496a6:1224'
---
Uncaught exception!
---
Following is a full traceback, and a user traceback.
The user traceback excludes non-user modules. The most recent call is displayed last.
Full debugging traceback:
"/home/albert/seattle-88b8-plus/seattle_repy/repyV2/emultimer.py", line 115, in wrapped_func
"tcp_relay.r2py", line 396, in _handle_client_request_helper
"session.r2py", line 102, in session_sendmessage
"session.r2py", line 84, in session_sendhelper
"/home/albert/seattle-88b8-plus/seattle_repy/repyV2/namespace.py", line 945, in __do_func_call
"/home/albert/seattle-88b8-plus/seattle_repy/repyV2/namespace.py", line 1207, in wrapped_function
"/home/albert/seattle-88b8-plus/seattle_repy/repyV2/emulcomm.py", line 1820, in send
User traceback:
"tcp_relay.r2py", line 396, in _handle_client_request_helper
"session.r2py", line 102, in session_sendmessage
"session.r2py", line 84, in session_sendhelper
Exception (with class 'exception_hierarchy.SocketClosedRemote'): The socket has been closed by the remote end!
---
Currently the method of copying the stack is to build a new stack using the existing AFFIX string. This was later changed to use the advertisement string (see #11 ). However that method will not copy over one sided or invisible AFFIX components. In order to do a real deepcopy of the stack, we need to traverse through all the components in the stack and individually copy them and push them onto a new stack.
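Such a traversal-based deep copy can be sketched by modeling the stack as singly-linked components via next_affix; the class and method shapes here are illustrative only:

```python
class AffixSketch(object):
    def __init__(self, name, next_affix=None):
        self.name = name
        self.next_affix = next_affix
    def copy(self):
        # Each component duplicates its own state...
        dup = AffixSketch(self.name)
        # ...and recursively copies whatever sits below it, so even
        # one-sided components like LogAffix survive the copy.
        if self.next_affix is not None:
            dup.next_affix = self.next_affix.copy()
        return dup

stack = AffixSketch("LogAffix", AffixSketch("NoopAffix"))
clone = stack.copy()
assert clone is not stack and clone.next_affix is not stack.next_affix
assert clone.next_affix.name == "NoopAffix"
```

Unlike rebuilding from an advertisement string, this approach preserves components that do not advertise themselves.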
Currently our RSAAffix has an implementation only for TCP connections. We should at some point come up with a good way of implementing encryption for UDP network calls as well.
The current architecture allows us to import and directly use some of the existing Affix components in a Repy program. This saves quite some typing when you test and hack away at a new Affix. Prototyping functionality is just one dy_import_module away.
At the same time, not all Affixes will work outside of an Affix stack. For instance, a decider Affix has no interesting function it can perform on itself (apart from diagnostics), it must instantiate other Affixes below it. Other things like synchronizing state should be easier inside an Affix stack too.
Do we want to put up with / encourage "stackless" usage for simple use cases? (This might deteriorate reusability, as contributors might not always have the stack in mind when building new Affix components.) Or should we try to prevent it? (This makes the initial learning curve steeper, as you need to grok the stack too.) Or shall we stick with what we have?
We need to create an installer for the AFFIX framework and find a good way to distribute the package. We may use distutils to help package everything up.
What is The Official Way to track connection/flow state using an Affix?
I have tried a few options for a UDP Affix I am writing (adding an item to the affix_context dict, storing an attribute on the Affix object instance, storing a class attribute), but none worked: class attributes are read-only due to safety restrictions in Repy, and for instance attributes, the problem is that each UDP send causes a new Affix object to be created, so previously assigned attributes are not available.
Of course I can use a non-Affix-related global variable for storing stuff, but this hurts by design.
If there is an issue while the NatPunchAffix is communicating with the NAT forwarder, the Affix may hang for a long time, which will be extremely problematic.
The current CoordinationAffix includes logic in openconnection to account for the time it takes to look up a remote host's Affix stack, and to subtract it from the timeout arg passed to the next level's Affix component. The lookup can take longer than the timeout set by the caller, so it happens that the next component is called with a negative timeout, resulting in
Exception (with class 'exception_hierarchy.RepyArgumentError'): Must be non-negative.
(What a vague error message, BTW.)
My suggestion is to disregard the time taken for lookups, as it varies a lot, e.g. due to load on the advertise services, network conditions, or items being available through cachedadvertise.
Thoughts?
Currently, Repy developers need to create an AffixStackInterface object in order to use the AFFIX framework. We need to make the framework easier to use: users should be able to write 2-3 lines of code at the top of their program and have everything be AFFIX-enabled.
The code in the canilisten library needs to be reviewed for potentially missing error handling. I saw this recently:
Uncaught exception!
---
Following is a full traceback, and a user traceback.
The user traceback excludes non-user modules. The most recent call is displayed last.
Full debugging traceback:
"repyV2/repy.py", line 154, in execute_namespace_until_completion
"/home/poly_seattlenatforwarder/seattle/seattle_repy/repyV2/virtual_namespace.py", line 117, in evaluate
"/home/poly_seattlenatforwarder/seattle/seattle_repy/repyV2/safe.py", line 588, in safe_run
"dylink.r2py", line 546, in <module>
"dylink.r2py", line 407, in dylink_dispatch
"dylink.r2py", line 520, in evaluate
"/home/poly_seattlenatforwarder/seattle/seattle_repy/repyV2/virtual_namespace.py", line 117, in evaluate
"/home/poly_seattlenatforwarder/seattle/seattle_repy/repyV2/safe.py", line 588, in safe_run
"tcp_relay.r2py", line 672, in <module>
"canilisten.r2py", line 381, in check_specific_port
"canilisten.r2py", line 424, in check_port_with_specific_server
"canilisten.r2py", line 529, in request_connection_and_get_apparent_address
"session.r2py", line 31, in session_recvmessage
"sockettimeout.r2py", line 117, in recv
User traceback:
"dylink.r2py", line 546, in <module>
"dylink.r2py", line 407, in dylink_dispatch
"dylink.r2py", line 520, in evaluate
"tcp_relay.r2py", line 672, in <module>
"canilisten.r2py", line 381, in check_specific_port
"canilisten.r2py", line 424, in check_port_with_specific_server
"canilisten.r2py", line 529, in request_connection_and_get_apparent_address
"session.r2py", line 31, in session_recvmessage
"sockettimeout.r2py", line 117, in recv
Exception (with class '.SocketTimeoutError'): recv() timed out!!
---
I ran into the issue of how to name the connection of a branch stack when the branch stack uses the same IP/port. As an example, imagine you have a branching AFFIX AffixA, whose first branch consists of (CoordinationAffix)(AffixB) and whose second branch consists of (CoordinationAffix)(AffixC). Now, if the application calls listenforconnection(myip, 12345), then one of the branches (let's say the one containing AffixB) should use port 12345 and the other branch will use a different port number. The CoordinationAffix at the root would already have advertised AffixA under the key 'myip:12345', so when the CoordinationAffix on the branch using port 12345 tries to advertise under the key 'myip:12345', there will be a collision.
One option would be to have both the branches use a different port number, but this would mean that the original port number that was provided by the application would not be used (unless we just open up a socket without using it).
The other option would be for the application to provide three unique names. One for the root, and one for each of the branches.
Here is what the example AFFIX composition looks like.
parse_affix_string() needs to be changed to reject various AFFIX strings that were acceptable before. For example, previously an AFFIX stack was allowed to have other AFFIX components underneath branching/splitter AFFIXs. This is no longer allowed in our new design.
When we get down to the last layer of the AFFIX stack and have reached the RepyNetworkAPI, should we convert the localip/destip arguments of calls such as openconnection/listenforconnection/sendmessage/listenformessage to actual IP addresses using gethostbyname()? The RepyV2 API does not accept DNS names as arguments for localip/destip, but there is nothing stopping us from translating a DNS name to its IP address in our RepyNetworkAPI layer. This would allow users to connect/listen with DNS names when using the AFFIX framework (even when using just NoopAffix). Is this a good or bad idea?
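Such a translation could be sketched as below. The resolver is injected so the example stays self-contained; in the real RepyNetworkAPI layer it would be gethostbyname(), and the IPv4-literal test is a deliberate simplification:

```python
def resolve_if_name(addr, gethostbyname):
    """Pass IPv4 literals through untouched; translate DNS names.
    (Simplified literal detection: four dot-separated numeric parts.)"""
    parts = addr.split(".")
    if len(parts) == 4 and all(p.isdigit() for p in parts):
        return addr          # already an IP literal, no lookup needed
    return gethostbyname(addr)

# A fake DNS table stands in for the real lookup in this sketch:
fake_dns = {"example.zenodotus.poly.edu": "128.238.64.141"}
assert resolve_if_name("10.0.0.1", fake_dns.get) == "10.0.0.1"
assert resolve_if_name("example.zenodotus.poly.edu", fake_dns.get) == "128.238.64.141"
```

One design consideration: doing the lookup at the bottom layer means every component above it can treat the name opaquely, which matches the current virtual_host_name handling.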
There should be a way to dynamically enable and disable logging output on Affix components/stacks. Ideally, we would want to be able to do this separately for different deployments (production, beta, ...) and possibly single nodes.
The advertise service would be a logical place to put this information on, ideally in a signed format.
(Note: This was first documented as #1411 on the Seattle Trac.)
In our current implementation, only the Coordination AFFIX is responsible for the name. We provide the name/locator as one of the arguments to the Coordination AFFIX, and this value does not get passed on to the other AFFIX components. Is this the correct way of doing it?
Furthermore, when we create an AFFIX stack through the AffixStackInterface, we provide an argument for the virtual_host_name, which is used when the AFFIX stack is created. However, this argument is not used for any purpose other than AffixStack.gethostname(), which returns this value.
The reason I originally did not pass the virtual_host_name to the individual components is to ensure that the components could not change the name and return an ambiguous value when the method getmyip() is invoked.
Any thoughts on what should be done here? Should the virtual_host_name provided by the application be made available to all the AFFIX components, or just to the AFFIX stack? If we do not make it available to every AFFIX component, then a name will need to be passed down to decider and branching/splitter AFFIXes such as the Coordination AFFIX and the Legacy AFFIX.
The existing code base for AFFIX currently resides in the branch directory of the project Seattle (https://seattle.poly.edu). The code needs to be migrated over to GitHub.
It seems that the advertisement thread on the Nat Forwarder sometimes dies silently without any notification. There are a few forwarders that are still up and running, however their IP address is no longer being advertised under the forwarder key.
We need a sane way to detect if the advertisement thread dies and to restart it, if it does.
affixpythoninterface.py is an interface that allows users to use the AFFIX framework from Python with ease. By default, affixpythoninterface overwrites the Repy network API with functions that use AFFIX with the default AFFIX string of CoordinationAffix. However, setting a new string does not seem to update the network calls to use a new AFFIX stack built from the new string.
If the file containing an Affix component class is not named class name (in lower case) + .r2py, then calling affix_stack.find_and_register_affix might succeed in importing, but fail to register the component internally, and throw a rather unspecific KeyError.
In the code leading to the traceback below, transparenttcprelayaffix.r2py defines a class Transparent_TCPRelayAffix instead of TransparentTCPRelayAffix (which would yield the filename when lowercased, but is not the actual class name). When affix_stack.find_and_register_affix tries to copy the Affix's _context, that context (named after the class) doesn't exist, resulting in the KeyError. It would be a good idea to check for this mismatch and raise a more descriptive error message. (I assume a developer running unit tests would catch it very early on in development, and trivially correct the mistake.)
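A sketch of the suggested sanity check inside find_and_register_affix; the error class and message wording are illustrative:

```python
class AffixConfigError(Exception):
    pass

def check_affix_naming(filename, classname):
    """Raise a descriptive error instead of a bare KeyError when the
    filename does not match classname.lower() + '.r2py'."""
    expected = classname.lower() + ".r2py"
    if filename != expected:
        raise AffixConfigError(
            "File %r defines class %r, but the Affix loader expects that "
            "class in %r. Rename the class or the file." %
            (filename, classname, expected))

# The mismatch from the traceback above is caught with a clear message:
raised = False
try:
    check_affix_naming("transparenttcprelayaffix.r2py",
                       "Transparent_TCPRelayAffix")
except AffixConfigError:
    raised = True
assert raised
check_affix_naming("noopaffix.r2py", "NoopAffix")  # matching names pass
```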
Uncaught exception!
---
Following is a full traceback, and a user traceback.
The user traceback excludes non-user modules. The most recent call is displayed last.
Full debugging traceback:
"repy.py", line 154, in execute_namespace_until_completion
"/private/tmp/affix/RUNNABLE/virtual_namespace.py", line 117, in evaluate
"/private/tmp/affix/RUNNABLE/safe.py", line 588, in safe_run
"dylink.r2py", line 546, in <module>
"dylink.r2py", line 407, in dylink_dispatch
"dylink.r2py", line 520, in evaluate
"/private/tmp/affix/RUNNABLE/virtual_namespace.py", line 117, in evaluate
"/private/tmp/affix/RUNNABLE/safe.py", line 588, in safe_run
"test-ttr.r2py", line 5, in <module>
"affix_stack.r2py", line 32, in __init__
"affix_stack.r2py", line 133, in build_stack
"affix_stack.r2py", line 218, in find_and_register_affix
"/private/tmp/affix/RUNNABLE/safe.py", line 678, in __getitem__
User traceback:
"dylink.r2py", line 546, in <module>
"dylink.r2py", line 407, in dylink_dispatch
"dylink.r2py", line 520, in evaluate
"test-ttr.r2py", line 5, in <module>
"affix_stack.r2py", line 32, in __init__
"affix_stack.r2py", line 133, in build_stack
"affix_stack.r2py", line 218, in find_and_register_affix
Exception (with type 'exceptions.KeyError'): 'Transparent_TCPRelayAffix'
While looking through the log files of the NAT forwarder, I found the following error message/notifications in the files.
[1401575103.8258] DEBUG: Error in connection establishment with 218.6.135.166:17022: <type 'exceptions.ValueError'> Incorrect value in message size header. Found character 'E' in the header.
[1401586382.5270] INFO: Incoming connection from '218.6.135.166:63015'
[1401586382.5273] INFO: Got connection from 218.6.135.166:63015
[1401586382.5281] DEBUG: Error in connection establishment with 218.6.135.166:63015: <type 'exceptions.ValueError'> Incorrect value in message size header. Found character '^B' in the header.
[1401586382.9075] INFO: Incoming connection from '218.6.135.166:26212'
[1401586382.9078] INFO: Got connection from 218.6.135.166:26212
[1401586382.9086] DEBUG: Error in connection establishment with 218.6.135.166:26212: <type 'exceptions.ValueError'> Incorrect value in message size header. Found character '^A' in the header.
[1401586634.9625] INFO: Incoming connection from '218.6.135.166:60393'
[1401586634.9628] INFO: Got connection from 218.6.135.166:60393
[1401586634.9636] DEBUG: Error in connection establishment with 218.6.135.166:60393: <type 'exceptions.ValueError'> Incorrect value in message size header. Found character '^B' in the header.
[1401586635.3432] INFO: Incoming connection from '218.6.135.166:20272'
[1401586635.3434] INFO: Got connection from 218.6.135.166:20272
[1401586635.3442] DEBUG: Error in connection establishment with 218.6.135.166:20272: <type 'exceptions.ValueError'> Incorrect value in message size header. Found character '^A' in the header.
I suspect this may be an issue with the session library.
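If the session library frames each message as a decimal length header followed by the payload, as the error text suggests, then a stricter header parser would turn the stray bytes above into a clear protocol error instead of failing deep inside the conversion. A sketch with illustrative names, not the actual session.repy code:

```python
def parse_header(header_bytes):
    """Validate a decimal message-size header; reject anything else."""
    header = header_bytes.decode("ascii", errors="replace")
    if not header.isdigit():
        raise ValueError(
            "Incorrect value in message size header. Found %r." % header)
    return int(header)

assert parse_header(b"1024") == 1024
# The garbage bytes seen in the forwarder log ('E', ^A, ^B) are rejected:
for bad in (b"E", b"\x01", b"\x02"):
    raised = False
    try:
        parse_header(bad)
    except ValueError:
        raised = True
    assert raised
```

The log suggests these connections never spoke the session protocol at all (port scans, or clients using a different Affix stack), so a clean rejection plus a DEBUG log line, as the forwarder already does, may be the right outcome.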
Here's a problem I saw on my MacBook Pro (OS X 10.6.8) when my node went offline and then online a couple of times. CoordinationAffix doesn't remove the components below it before it builds a "new" stack. In the example below, we thus have multiple TCPRelayAffixes, most of them dead.
CoordinationAffix: Advertised c2a76dc744eb402b89ea5ff625e86456ef228d60,1224,TCP (TCPRelayAffix,131.130.125.5:63140)(TCPRelayAffix,131.130.125.5:63150)(TCPRelayAffix,131.130.125.5:63160)(TCPRelayAffix,131.130.125.5:63150)(NamingAndResolverAffix)
Note: I haven't seen this behavior on Android yet.
The current implementation of BaseAffix has the push method insert an Affix object into the stack (thereby pushing the Affix objects below it down one slot). This is done by modifying the next_affix references of the pushing Affix component and the inserted one, like inserting an element into a singly-linked list. See https://github.com/SeattleTestbed/affix/blob/master/components/baseaffix.r2py#L175-L180.
I think that the push method should also be able to push Affix stacks, i.e. components with already set-up interlinking. Apart from the potential race conditions, you could implement similar functionality by parsing the desired Affix stack string manually and pushing multiple times, but why reimplement functionality that already exists? affix_stack has a build_stack method for the first part, and BaseAffix.push can be amended to set the correct references (the current Affix's, and the new stack's bottommost) with a bit of stack inspection.
(I'll add a practical example where this is really useful in a bit.)
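The amended push described above amounts to splicing a pre-linked chain into a singly-linked list. A sketch, assuming the same next_affix layout as baseaffix.r2py but with illustrative names:

```python
class Node(object):
    def __init__(self, name, next_affix=None):
        self.name = name
        self.next_affix = next_affix

def push_stack(current, substack_top):
    """Splice a pre-linked sub-stack in below `current`."""
    bottom = substack_top
    while bottom.next_affix is not None:   # find the substack's bottommost
        bottom = bottom.next_affix
    bottom.next_affix = current.next_affix # bottom now points at old next
    current.next_affix = substack_top      # current now points at new top

stack = Node("CoordinationAffix", Node("NoopAffix"))
push_stack(stack, Node("LogAffix", Node("StatAffix")))
names, node = [], stack
while node:
    names.append(node.name)
    node = node.next_affix
assert names == ["CoordinationAffix", "LogAffix", "StatAffix", "NoopAffix"]
```

Pushing a single component is just the degenerate case where the substack's top is also its bottom, so the existing single-push behavior falls out for free.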
The original name of our project was Shim, and our code still has many references to Shims. We need to replace all of these references with Affix.
It seems like a good idea to build Affix components that have logging capabilities. I recommend having an optional argument upon initialization that specifies the file to write log information to. If blank, do not log.
Comments / feedback?
When listenforconnection() is invoked on the NatPunchAffix, the Affix needs to communicate with a forwarder to establish a connection. During this communication, the Affix may raise a SocketClosedRemote error. A SocketClosedRemote error is not expected from the listenforconnection() call and needs to be translated/handled in some other way.
Currently, the Coordination service assumes the last string returned by the advertise library is the most recent one, and loads that one. This is not necessarily the case, so the Coordination service sometimes loads an out-of-date Affix configuration.